Friday, May 9, 2014

Restore all SQL Server backups from a directory using Python

Problem:

I had to set up database mirroring for 50 databases on Amazon EC2 servers in different availability zones. Mirroring has to be initialized with a full restore and at least one transaction log restore. Restoring 50 databases manually is a cumbersome task, and some of these databases have multiple data files with differing logical names. I figured it would be better to generate restore scripts for every backup in a backup folder, and that was the perfect excuse to try Python at work. I used pyodbc to connect to the database; if you don't have the pyodbc module, you might want to install it before trying the script. The script can handle databases with multiple data files, and default target data file/log file directories can be declared.
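
If pyodbc is missing, it can be installed with pip (assuming pip is available on the machine):

pip install pyodbc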

import os, sys
import pyodbc

backupdir = 'E:\\SQLData\\backups\\'
conn = pyodbc.connect('DRIVER={SQL Server};SERVER=localhost;DATABASE=DBAdmin;UID=sa;PWD=sysadmin')
cursor = conn.cursor()
DataDir = 'D:\\SQLData\\Data'
LogDir = 'E:\\SQLData\\Logs'


def iter_islast(row):
    # yield (item, islast) pairs; islast is True only for the final item
    it = iter(row)
    prev = it.next()
    for item in it:
        yield prev, False
        prev = item
    yield prev, True


for root, dirs, files in os.walk(backupdir, topdown = False):
    for name in files:
        TempSQL = ""
        f = os.path.join(root, name)
        print "Restore Script for backupfile " + f + " below: "
        print "==============================================="
        
        cursor.execute("restore headeronly from disk = '%s';" % f)
        
        rows = cursor.fetchall()
        for row in rows:
            TempSQL = "RESTORE DATABASE  '" + row.DatabaseName + "' FROM DISK = '" + f + "' , WITH \n"
        
        cursor.execute("restore filelistonly from disk = '%s';" % f)
        rows2 = cursor.fetchall()
         
        for row, islast in iter_islast(rows2):
            # MOVE needs a full target file path, so reuse the original physical file name
            filename = row.PhysicalName.split('\\')[-1]
            targetdir = LogDir if row.Type == 'L' else DataDir
            if islast:
                TempSQL += " MOVE '" + row.LogicalName + "' TO '" + targetdir + "\\" + filename + "', NORECOVERY \n"
            else:
                TempSQL += " MOVE '" + row.LogicalName + "' TO '" + targetdir + "\\" + filename + "', \n"
                
        print TempSQL

The result will look like the sample below. Just copy the commands and execute them!
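
For a database with two data files and a log file, the generated script would look something like this (hypothetical database and logical file names):

RESTORE DATABASE [SalesDB] FROM DISK = 'E:\SQLData\backups\SalesDB.bak' WITH
 MOVE 'SalesDB_Data' TO 'D:\SQLData\Data\SalesDB_Data.mdf',
 MOVE 'SalesDB_Data2' TO 'D:\SQLData\Data\SalesDB_Data2.ndf',
 MOVE 'SalesDB_Log' TO 'E:\SQLData\Logs\SalesDB_Log.ldf', NORECOVERY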




Thursday, April 24, 2014

Upload backups to Amazon S3 bucket using boto, Python

Here is a script to upload backups taken within the last day (24 hours) to an S3 bucket. We want to exclude transaction log backups, so I used "if 'LOG' not in name"; you may change it accordingly.

import os, sys, time
from boto.s3.connection import S3Connection
from boto.s3.key import Key
conn = S3Connection('YOURKEY','SECRETKEY')
bucket = conn.get_bucket('MyBeautifulBucket')
age = 86400 #1 day = 24*60*60
backupdir = 'O:\\Backups\\'

for root, dirs, files in os.walk(backupdir, topdown=False):
    for name in files:
        if 'LOG' not in name: #exclude copying transaction log backups
            f = os.path.join(root, name)
            now = time.time()
            modified = os.stat(f).st_mtime
            if modified > now - age:
                k=Key(bucket)
                k.key= 'mySubBucket/' + str(name)
                k.set_contents_from_filename(f)
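
To verify the uploads, you can list the keys under the prefix afterwards; a quick sketch reusing the same boto connection:

# sanity check: list what actually landed under the prefix
for key in bucket.list(prefix='mySubBucket/'):
    print key.name, key.size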

Saturday, April 12, 2014

Run a query on multiple schemas postgres

If you have one huge database with multiple schemas in it, one for each customer (single database, multi-tenant model), then from time to time you might have to run a query on all schemas, for example to get a build number or a user count. You could either write a shell script that reads the tenant list and builds SQL queries to execute dynamically, or you could use a PL/pgSQL function. Below is a function that gets a count of users from every schema by querying the table tUser:

-- create a record for the returned data 
DROP FUNCTION IF EXISTS count_em_all ();
DROP TYPE IF EXISTS table_count;
CREATE TYPE table_count AS (table_name TEXT, num_rows INTEGER); 

   CREATE OR REPLACE FUNCTION count_em_all () RETURNS SETOF table_count  AS 
   $$
   DECLARE 
       the_count RECORD; 
       t_name RECORD; 
       r table_count%ROWTYPE; 

   BEGIN
       -- list the schemas
       FOR t_name IN 
         select distinct schemaname
         from pg_catalog.pg_tables
         where tablename = 'tUser' --and schemaname not ilike 'something|_you|_liketofilter%' escape '|'      
           LOOP
               -- Run your query on each schema; the loop body is intentionally
               -- empty, so the_count simply ends up holding the single returned row
               FOR the_count IN EXECUTE 'SELECT COUNT(*) AS "count" FROM "'||t_name.schemaname||'".tUser'
               LOOP 
               END LOOP; 

               r.table_name := t_name.schemaname; 
               r.num_rows := the_count.count; 
               RETURN NEXT r; 
           END LOOP; 
           RETURN; 
   END;
   $$ LANGUAGE plpgsql;

select * from count_em_all();
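
The output will look something like this (hypothetical tenant schemas):

  table_name | num_rows
 ------------+----------
  tenant_a   |      142
  tenant_b   |       57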

SQL Server: Setup Mirroring Step by Step using TSQL

It is 2014 and people should be using AlwaysOn instead of mirroring. But some clients are still on SQL Server 2008 R2, and I am about to set up mirroring for one such customer in a couple of days, so posting these steps here won't hurt. There is a handy GUI wizard that lets you create mirroring pretty easily, but if you have to mirror around 30 databases, using scripts is a lot easier.

Our setup is simple. SQL01 and SQL02 are our two servers - SQL01 is the principal and SQL02 is the mirror. We are not using a witness for this setup.

Steps on Mirror Server (SQL02)

1. On SQL01, take a full backup and also a transaction log backup of the database you would like to mirror. (The database has to be in FULL recovery mode, otherwise no mirroring for you!)

2. Restore database on SQL02
Use master;
RESTORE DATABASE [MirrorMe] FROM DISK = N'K:\Backups\MirrorMe_041214.bak' WITH FILE = 1,  
NORECOVERY,  NOUNLOAD,  STATS = 10;
RESTORE LOG [MirrorMe] FROM DISK = N'K:\Backups\MirrorMe_log.trn' WITH FILE = 1,  
NORECOVERY,  NOUNLOAD,  STATS = 10;
GO

3. Create an endpoint for the mirror instance, SQL02. An endpoint is necessary for database mirroring/AlwaysOn to receive connections from other servers.
CREATE ENDPOINT [endpoint_SQL02] 
 AUTHORIZATION [BINYOGA\yogirk]
 STATE=STARTED
 AS TCP (LISTENER_PORT = 7024, LISTENER_IP = ALL)
 FOR DATA_MIRRORING (ROLE = ALL, AUTHENTICATION = WINDOWS NEGOTIATE, ENCRYPTION = REQUIRED ALGORITHM RC4);
GO

4. Create a login for use in the mirroring setup and grant connect on the endpoint.

USE [master]
CREATE LOGIN [BINYOGA\iMirror] FROM WINDOWS WITH DEFAULT_DATABASE=[master]
GRANT CONNECT on ENDPOINT::endpoint_SQL02 TO [BINYOGA\iMirror];
GO

5. You might want to create and maintain the same set of logins/users on both the principal and the mirror server. When you do a failover, you can simply fix orphaned users and your application can keep talking to the database. The script below generates CREATE LOGIN statements (with the original SIDs and password hashes) that you can run on the mirror.

SELECT 
  'create login [' + p.name + '] ' + 
  case when p.type in('U','G') then 'from windows ' else '' end +
  'with ' +
  case when p.type = 'S' then 'password = ' + master.sys.fn_varbintohexstr(l.password_hash) + ' hashed, ' +
  'sid = ' + master.sys.fn_varbintohexstr(l.sid) + 
  ', check_expiration = ' + case when l.is_expiration_checked > 0 then 'ON, ' else 'OFF, ' end + 
  'check_policy = ' + case when l.is_policy_checked > 0 then 'ON, ' else 'OFF, ' end +
  case when l.credential_id > 0 then 'credential = ' + c.name + ', ' else '' end 
  else '' end +
  'default_database = ' + p.default_database_name +
  case when len(p.default_language_name) > 0 then ', default_language = ' + p.default_language_name else '' end
FROM sys.server_principals p 
LEFT JOIN sys.sql_logins l ON p.principal_id = l.principal_id 
LEFT JOIN sys.credentials c ON  l.credential_id = c.credential_id
WHERE p.type in('S','U','G')
  AND p.name <> 'sa'
  --AND p.name in ('myLogin1','myLogin2' );

6. Add the databases to mirroring. You'll notice that the partner endpoint is on port 7023, which is yet to be created on our principal server, SQL01.

USE master;
GO
--assuming that you have already restored these databases and at least one transaction log backup with NORECOVERY 
ALTER DATABASE MirrorMe01 SET PARTNER = 'TCP://10.10.2.10:7023';
ALTER DATABASE MirrorMe02 SET PARTNER = 'TCP://10.10.2.10:7023';
ALTER DATABASE MirrorMe03 SET PARTNER = 'TCP://10.10.2.10:7023';
ALTER DATABASE MirrorMe04 SET PARTNER = 'TCP://10.10.2.10:7023';  

Steps on Principal Server (SQL01)

1. Now switch to the principal server, in this case SQL01, and create an endpoint.

--you can check existing endpoints, if any using below query
SELECT name, role_desc, state_desc FROM sys.database_mirroring_endpoints;
SELECT name, port FROM sys.tcp_endpoints;
--create an endpoint

CREATE ENDPOINT [endpoint_SQL01] 
 AUTHORIZATION [BINYOGA\yogirk]
 STATE=STARTED
 AS TCP (LISTENER_PORT = 7023, LISTENER_IP = ALL)
 FOR DATA_MIRRORING (ROLE = ALL, AUTHENTICATION = WINDOWS NEGOTIATE, ENCRYPTION = REQUIRED ALGORITHM RC4);
GO

2. Create the mirroring login and grant it connect on the endpoint.
USE [master]
CREATE LOGIN [BINYOGA\iMirror] FROM WINDOWS WITH DEFAULT_DATABASE=[master]
GRANT CONNECT on ENDPOINT::endpoint_SQL01 TO [BINYOGA\iMirror];
GO


3. Add databases to Mirroring

USE master;
ALTER DATABASE MirrorMe01 SET PARTNER = 'TCP://10.10.2.21:7024';
ALTER DATABASE MirrorMe02 SET PARTNER = 'TCP://10.10.2.21:7024';
ALTER DATABASE MirrorMe03 SET PARTNER = 'TCP://10.10.2.21:7024';
ALTER DATABASE MirrorMe04 SET PARTNER = 'TCP://10.10.2.21:7024'; 
GO

Done, you should now see the databases in a mirrored state.
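
To verify from either server, you can query sys.database_mirroring; this is just a sanity check, not part of the setup itself:

SELECT DB_NAME(database_id) AS database_name
 ,mirroring_role_desc
 ,mirroring_state_desc
 ,mirroring_partner_name
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;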

Saturday, April 5, 2014

Shell script: Analyze schema (update statistics) - PostgreSQL

ANALYZE in PostgreSQL is like sp_updatestats in SQL Server. ANALYZE gathers statistics about the contents of tables and stores them in the pg_statistic system catalog, where the query planner uses them to create optimal query plans. Below is a shell script to update statistics on a single schema. If you are on a single-database, multi-tenant model, this is pretty useful for updating the target tenant schema without analyzing the entire database.

Before executing this, you might want to add the postgres user, password and host to your .pgpass file, so that you can run the shell script without being prompted for a password.

vi /root/.pgpass 

#Add details to file hostname:port:database:username:password
*:*:*:dbadmin:SysAdminPwd

#set file permissions
chmod 600 /root/.pgpass

Now the shell script:

#!/bin/bash
#
#
# Title    : analyze_schema.sh
# Description:  
# Shell script 
#  
# Options:
#  Schema name that requires a statistic update
#
# Requirements:
#  Installed psql and added path to binary in PATH system variable. 
#  
# Examples:
#  $HOME/analyze_schema.sh "MySchemaName"
#

SCHEMANAME="$1"
DBNAME="${2:-MyDatabase}"
DBUSER="${3:-dbadmin}"
DBPORT="${4:-7432}"
 
ANALYZESQL="anaylze_schema.sql"
ANALYZELOG="analyze_schema.log"

test=$(psql -A -1 -t -U $DBUSER -d $DBNAME -p $DBPORT -c "select * from pg_namespace where nspname='$SCHEMANAME'")
if [[ "$test" == "" ]]; then
  echo "Schema \"$SCHEMANAME\" in db \"$DBNAME\" for dbuser \"$DBUSER\" and port \"$DBPORT\" does not exists"
  exit 1
fi

GENANALYZESQL="
select 'ANALYZE VERBOSE \"'||current_database()||'\".\"'||t.schemaname||'\".\"'||t.tablename||'\"; ' as analyze_statement
from pg_tables t
where schemaname = '$SCHEMANAME'
order by t.tablename;
"

echo "Started generating analyze script for Schema \"$SCHEMANAME\" started at $(date +'%Y-%m-%d %T')"
psql -A -q -1 -t -p $DBPORT -U $DBUSER -d $DBNAME -c "$GENANALYZESQL" > "$ANALYZESQL" 2>"$ANALYZELOG"
if ! [[ "$(cat $ANALYZELOG)" == "" ]]; then
  echo "Generation of analyze script for schema \"$SCHEMANAME\" in db \"$DBNAME\" for dbuser \"$DBUSER\" and port \"$DBPORT\" had error(s):"
  cat "$ANALYZELOG"
  exit 1
fi
echo "Analyzing schema \"$SCHEMANAME\" started at $(date +'%Y-%m-%d %T')"
psql -A -q -1 -t -p $DBPORT -U $DBUSER -d $DBNAME -f "$ANALYZESQL" 2>&1 | /usr/sbin/prepend_datetime.sh >> "$ANALYZELOG"
echo "Analyzing schema \"$SCHEMANAME\" ended at $(date +'%Y-%m-%d %T')"
echo "Analyzing schema sql script is in file $ANALYZESQL. Analyze log is below:"
cat "$ANALYZELOG"

Tuesday, April 1, 2014

Delete files older than x days - Python, bash etc

Our postgres logs directory had almost six months of logs and we only needed five days' worth of data. It's pretty straightforward to remove the old files with bash:

find /db/logs/archive_data/* -type f -mtime +5 | xargs rm

But this can fail with an "Argument list too long" error when the glob expands to too many files.
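
If you want to stay in bash, one workaround is to let find walk the directory itself instead of expanding a glob, and use -print0/-0 so odd file names survive:

find /db/logs/archive_data -type f -mtime +5 -print0 | xargs -0 rm -f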

Here is another way in Python:

Example usage: save this as cleanup.py and run: python cleanup.py /db/data 5
from sys import argv
import os, time

script, dir, age = argv
print "Searching directory %s for files older than %s day(s)" % (dir, age)

#convert age to seconds. 1 day = 24*60*60
age = int(age)*86400

for file in os.listdir(dir):
    now = time.time()
    filepath = os.path.join(dir, file)
    modified = os.stat(filepath).st_mtime
    if modified < now - age:
        if os.path.isfile(filepath):
            os.remove(filepath)
            print 'Deleted: %s (%s)' % (file, modified)

Sunday, March 30, 2014

SQL Server Database Mirroring: automating mirror failovers

Problem:

You have a bunch of databases on a production server and you decided to go with SQL Server mirroring as a DR solution. Your IT department, in its infinite wisdom, thought that a witness server was a waste of resources, and you were asked to somehow make do with just a principal and a mirror server. Your databases are critical, and though the option of automated failover is ruled out for lack of a witness, you would like to put a process in place so that you are always battle-ready to deal with a possible disaster and recover quickly. The solution should be designed in such a way that even the most junior of your DBAs can handle it.

Solution:

For the sake of this explanation, let us assume the production server is YIN and the mirror server is YANG. So what happens when a disaster occurs? Assuming that you already have functional mirroring set up between YIN and YANG, you might have to do the following after a disaster.

  • When YIN goes down, the databases on YANG will be in a restoring/disconnected state. You'll bring all those databases online by performing a forced failover with allow data loss.
  • When the failover is done, you might want to fix orphaned users.
  • You then update whichever mechanism your application uses to connect to the databases so it points to the new server. In our case, we store database names and connection strings in a SQL Server database that sits on a well-maintained cluster (let's call it MOTHERSHIP), so we update those connection strings with the IP address of YANG.
  • We have a lot of business-specific SQL Agent jobs running on our production servers apart from the regular maintenance jobs. These jobs exist on both YIN and YANG, and they are kept disabled on the mirror server. Now that a disaster has occurred and we failed over to the evil YANG, we have to enable those jobs.
  • Send a notification that a failover occurred.
What Next?

Ok, you did all of that and you were back in business with minimal downtime. Your former principal server has been resurrected and is back online. YANG is currently the principal, and you might want to add YIN back to the mirroring setup as the new mirror. After a couple of days, you are sure that YIN is stable, and since it probably has better resources than YANG it should be the principal again, so you might have to do a soft (planned) failover. The following scenarios are possible in a setup like this:
  • Planned Failover from Yin to Yang. Stop jobs on Yin, enable jobs on Yang, fix users, update connection strings
  • Planned Failover from Yang to Yin, etc.
  • Forced Failover on Yin
  • Forced Failover on Yang
  • Resume Mirroring on Yin
  • Resume Mirroring on Yang
Enough theory, let's get our hands dirty. Creating batch scripts for each of these scenarios seemed like the simplest (if not the most elegant) solution to me. The actual failover logic can live in stored procedures, and the batch scripts can be kept on a server that is highly available.

Create these stored procedures on both the principal and the mirror server. An example procedure is below.

USE [master]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[usp_ForcedFailover]
AS

/* stored procedure for doing a forced failover of all mirrored databases to YANG
example usage:  EXEC master.dbo.usp_ForcedFailover
*/

DECLARE @sql NVARCHAR(300)
 ,@maxid INT
 ,@i INT
 ,@dbname SYSNAME
DECLARE @MirrorDBs TABLE (
 [dbid] INT identity(1, 1) NOT NULL PRIMARY KEY
 ,dbname SYSNAME NOT NULL
 )

SET NOCOUNT ON

INSERT INTO @MirrorDBs (dbname)
SELECT DB_NAME(database_id)
FROM sys.database_mirroring
WHERE mirroring_role = 2 -- Mirror
 AND mirroring_state IN (0,1) -- Disconnected or Suspended
 AND mirroring_safety_level = 2

--select * from @MirrorDBs
SELECT @maxid = max([dbid])
 ,@i = 1
FROM @MirrorDBs;

WHILE @i <= @maxid
BEGIN
 SELECT @dbname = dbname
 FROM @MirrorDBs
 WHERE [dbid] = @i

 SET @sql = 'ALTER DATABASE ' + QUOTENAME(@dbname) + ' SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;'

 PRINT @sql

 EXEC (@sql);

 SET @i = @i + 1
END

SET NOCOUNT OFF


Next, create scripts for enabling/disabling jobs. This one enables every currently disabled job; run it on the new principal. The disable counterpart is shown right after it.
declare @sql nvarchar(max) = '';
select
@sql += N'exec msdb.dbo.sp_update_job @job_name = ''' + name + N''', @enabled = 1;
' from msdb.dbo.sysjobs
where enabled = 0
order by name;

print @sql;
exec (@sql);
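
The disable counterpart (the disablejobs script referenced in the batch file below) is the same query with the flags flipped:

declare @sql nvarchar(max) = '';
select
@sql += N'exec msdb.dbo.sp_update_job @job_name = ''' + name + N''', @enabled = 0;
' from msdb.dbo.sysjobs
where enabled = 1
order by name;

print @sql;
exec (@sql);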


Create a script for fixing orphaned users:

DECLARE @sql NVARCHAR(300)
 ,@maxid INT
 ,@i INT
 ,@dbname SYSNAME
DECLARE @mirrors TABLE (
 [dbid] INT identity(1, 1) NOT NULL PRIMARY KEY
 ,dbname SYSNAME NOT NULL
 )

SET NOCOUNT ON

INSERT INTO @mirrors (dbname)
SELECT DB_NAME(database_id)
FROM sys.database_mirroring
WHERE mirroring_role = 1 -- Principal 
 AND mirroring_state = 4 -- Synchronized
 AND mirroring_safety_level = 2

--select * from @mirrors
SELECT @maxid = max([dbid])
 ,@i = 1
FROM @mirrors;

WHILE @i <= @maxid
BEGIN
 SELECT @dbname = dbname
 FROM @mirrors
 WHERE [dbid] = @i

 SET @sql = 'USE ' + QUOTENAME(@dbname) + 
 ' EXEC sp_change_users_login ''Auto_Fix'', ''myUser1'';
EXEC sp_change_users_login ''Auto_Fix'', ''myUser2'';
 '

 PRINT @sql

 EXEC (@sql);

 SET @i = @i + 1
END
 
Update connection strings on the MOTHERSHIP server:
USE ConnectionDB
GO
UPDATE COMPANY SET ConnectionString='Data Source=10.10.2.10;User ID=fancyuser;Password=fancypass;Max Pool Size=200' 
WHERE ConnectionString='Data Source=10.10.2.11;User ID=fancyuser;Password=fancypass;Max Pool Size=200';
Finally, the batch file that brings all these together:
@echo off
goto :startbatch

Description
=============================================================
This batch file is used for 
1. Performing a soft failover of YIN to YANG. YANG will become the primary
2. Fix Database users
3. Update connection strings in connectiondb database post failover
4. Disable jobs on YIN
5. Enable jobs on YANG
6. Send a notification email 
===============================================================

:startbatch

set source_server=10.10.2.10
set mothership_server=10.10.2.23
set mirror_server=10.10.2.11
set serverlogin=fancyuser
set serverpass=fancypass
set connectiondb=connectiondb

set home_dir=d:\mirrorme\Failover_YIN_TO_YANG
set failover_script=%home_dir%\failover.sql
set failover_out=%home_dir%\failover_out.txt
set fixusers_script=%home_dir%\fixusers.sql
set fixusers_out=%home_dir%\fixusers_out.txt
set update_connectiondb=%home_dir%\update_connectiondb.sql
set update_connectiondb_out=%home_dir%\update_connectiondb_out.txt
set disablejobs_script=%home_dir%\disablejobs.sql
set disablejobs_out=%home_dir%\disablejobs_out.txt
set enablejobs_script=%home_dir%\enablejobs.sql
set enablejobs_out=%home_dir%\enablejobs_out.txt




REM Failover from YIN to YANG
sqlcmd -S tcp:%source_server%,1433 -d master -U %serverlogin% -P %serverpass% -s "," -W -h -1 -i %failover_script% -o %failover_out%

REM Run fix users script on YANG to deal with orphan users
sqlcmd -S tcp:%mirror_server%,1433 -d master -U %serverlogin% -P %serverpass% -s "," -W -h -1 -i %fixusers_script% -o %fixusers_out%

REM update connectiondb database with new connection string
sqlcmd -S tcp:%mothership_server%,1433 -d %connectiondb% -U %serverlogin% -P %serverpass% -s "," -W -h -1 -i %update_connectiondb% -o %update_connectiondb_out%

REM disable jobs on YIN because we failedover to YANG
sqlcmd -S tcp:%source_server%,1433 -d master -U %serverlogin% -P %serverpass% -s "," -W -h -1 -i %disablejobs_script% -o %disablejobs_out%

REM Enable jobs on YANG since it is the new primary
sqlcmd -S tcp:%mirror_server%,1433 -d master -U %serverlogin% -P %serverpass% -s "," -W -h -1 -i %enablejobs_script% -o %enablejobs_out%


REM send emails on completion
C:\work\tool\sendemail -t allcoolsdudes@Nirvanainc.com -u "Mirroring failover of YIN to YANG" -m "Database Mirroring Failover of YIN to YANG has been triggered and completed. SQL Agent jobs on YIN were disabled and jobs on YANG were enabled. Check logs attached." -s 192.24.2.110:25 -f "LordOfServers@NirvanaInc.com" -a %failover_out% -a %fixusers_out% -a %update_connectiondb_out% -a %disablejobs_out% -a %enablejobs_out%


To do:

Update this post with scripts for all the other possible failover scenarios.

Thursday, March 20, 2014

Batch Script: SQL Server Database Backup, Refresh, sanitize, run script and upload csv to sftp

One of the customers requested the following:

Back up a production database and restore it on a dev server, sanitize the refreshed database, then run a script and upload the results as a CSV file to an SFTP site. I know this can be done with SSIS, but I thought a batch job would be simpler to manage and troubleshoot.

First, create a directory for this job and add the following files to it. I will call mine D:\NirvanaInc. We are going to use the following tools to get this task done:

  • SQLCMD - To run backup/restore commands, sanitize scripts etc
  • PSFTP - To upload files to an SFTP site
  • sendemail - To send notification emails

Now that we are ready, let's start building the scripts we will use. The idea is to create one batch file which calls various scripts that do the backup, restore, cleanup etc. This approach makes it much easier to maintain the job and change it in the future, if needed.

backup.sql

This is a simple backup script:

USE master;

DECLARE @startdate VARCHAR(100);

SET @startdate = 'backup start: ' + convert(VARCHAR(32), getdate(), 121);

PRINT @startdate;

BACKUP DATABASE [NirvanaProd] TO DISK = N'R:\Backups\NirvanaProd.bak'
WITH COPY_ONLY
 ,NOFORMAT
 ,INIT
 ,NAME = N'NirvanaProd-Full Database Backup'
 ,SKIP
 ,NOREWIND
 ,NOUNLOAD
 ,COMPRESSION
 ,STATS = 10
GO

DECLARE @backupSetId AS INT

SELECT @backupSetId = position
FROM msdb.dbo.backupset
WHERE database_name = N'NirvanaProd'
 AND backup_set_id = (
  SELECT max(backup_set_id)
  FROM msdb..backupset
  WHERE database_name = N'NirvanaProd'
  )

IF @backupSetId IS NULL
BEGIN
 RAISERROR (
   N'Verify failed. Backup information for database ''NirvanaProd'' not found.'
   ,16
   ,1
   )
END

RESTORE VERIFYONLY
FROM DISK = N'R:\Backups\NirvanaProd.bak'
WITH FILE = @backupSetId
 ,NOUNLOAD
 ,NOREWIND
GO

DECLARE @enddate VARCHAR(100);

SET @enddate = 'backup end: ' + convert(VARCHAR(32), getdate(), 121);

PRINT @enddate;

Restore.sql

-- restore backup on target server, SQL:
USE master;

-- wait for start time
DECLARE @now DATETIME
 ,@targettime DATETIME;

-- targettime is set to 02:00:00 AM, If the backup is done sooner, it will wait till 2 AM to kick off restore
SET @targettime = substring(CONVERT(VARCHAR(39), GETDATE(), 121), 1, 10) + ' 02:00:00';
SET @now = GETDATE();

IF (@targettime > @now)
BEGIN
 WAITFOR TIME @targettime;
END;

DECLARE @startdate VARCHAR(100);
SET @startdate = 'db restore start: ' + convert(VARCHAR(32), getdate(), 121);
PRINT @startdate;

IF EXISTS (
  SELECT *
  FROM sys.databases
  WHERE NAME = 'NirvanaDev'
  )
BEGIN
 ALTER DATABASE [NirvanaDev] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
END

RESTORE DATABASE [NirvanaDev]
FROM DISK = N'R:\Remotebackups\Nirvanaprod.bak'
WITH FILE = 1
 ,MOVE N'FinancialSoftwareSystems_Data' TO N'D:\SQLData\NirvanaDev.mdf'
 ,MOVE N'FinancialSoftwareSystems_log' TO N'D:\SQLData\nirvanaDev.ldf'
 ,NOUNLOAD
 ,REPLACE
 ,STATS = 10
GO

ALTER DATABASE [NirvanaDev] SET MULTI_USER WITH ROLLBACK IMMEDIATE;
GO

DECLARE @enddate VARCHAR(100);
SET @enddate = 'db restore end: ' + convert(VARCHAR(32), getdate(), 121);
PRINT @enddate;

I will leave out the sanitize and grant-permissions scripts because they are specific to your environment; build yours accordingly. Next up is running a report script on the target database and uploading the output as a CSV to the SFTP site. I chose PSFTP because of its ease of use.

run_psftp.bat

echo lcd D:\NirvanaInc > psftp.bat
echo lpwd >> psftp.bat
echo put D:\NirvanaInc\sql_results.csv >> psftp.bat
echo quit >> psftp.bat

c:\sysutil\psftp mysftpuser@sftp.nirvanainc.com -pw myFancyPwd -b psftp.bat -v

By now, you should have the following files in your D:\NirvanaInc folder:
backup.sql
restore.sql
sanitize.sql
grant_script.sql
run.sql
run_psftp.bat


Finally, the main batch file:

@echo off
goto :startbatch

Description
=============================================================
This batch file is used for 
1. Taking a backup of NirvanaProd database
2. Copying backup file to a Dev db server
3. Restore database on Dev Environment
4. Sanitizing Database
5. Creating user and granting permission
6. Run a script
7. Upload results to sftp
===============================================================

:startbatch

REM set variables

set source_server=10.10.20.21
set target_server=10.10.20.22
set source_db=NirvanaProd
set target_db=NirvanaDev

set home_dir=D:\NirvanaInc\NirvanaProd
set backup_script=%home_dir%\backup.sql
set backup_out=%home_dir%\backup_out.txt
set copy_out=%home_dir%\copy_out.txt
set restore_script=%home_dir%\restore.sql
set restore_out=%home_dir%\restore_out.txt
set sanitize_script=%home_dir%\sanitize.sql
set sanitize_out=%home_dir%\sanitize_out.txt
set grant_script=%home_dir%\grant_script.sql
set grant_out=%home_dir%\grant_out.txt
set run_script=%home_dir%\run.sql
set run_script_out=%home_dir%\sql_results.csv

set serverlogin=mylogin
set serverpass=fancypassword123


REM backup your production database
sqlcmd -S tcp:%source_server%,1433 -d master -U %serverlogin% -P %serverpass% -s "," -W -h -1 -i %backup_script% -o %backup_out%


REM copy backup from source server to target server, log is in %copy_out%
echo copy of backup file started at: %date% %time% > %copy_out%
copy /y \\%source_server%\R$\Backups\%source_db%.bak \\%target_server%\R$\RemoteBackups 1>>%copy_out% 2>&1
echo copying of the backup file completed at: %date% %time% >> %copy_out%

REM Restore backup on the target server
sqlcmd -S tcp:%target_server%,1433 -d master -U %serverlogin% -P %serverpass% -s "," -W -h -1 -i %restore_script% -o %restore_out%

REM Sanitize database and create user, grant permissions
sqlcmd -S tcp:%target_server%,1433 -d %target_db% -U %serverlogin% -P %serverpass% -s "," -W -h -1 -i %sanitize_script% -o %sanitize_out%
sqlcmd -S tcp:%target_server%,1433 -d %target_db% -U %serverlogin% -P %serverpass% -s "," -W -h -1 -i %grant_script% -o %grant_out%

REM Run sql script
sqlcmd -S tcp:%target_server%,1433 -d %target_db% -U %serverlogin% -P %serverpass% -s "," -W -h -1 -i %run_script% -o %run_script_out%

REM Upload run_sql.csv results to SFTP
call %home_dir%\run_psftp.bat

REM send emails on completion
C:\sysutil\sendemail -t alldba@nirvanainc.com -cc ImeMyself@nirvanainc.com -u "%target_db% refresh" -m "%source_db% backup from server %source_server% is restored on %target_server% as db %target_db% . The script ran successfully and the output was uploaded to SFTP" -s 192.178.6.221:25 -f "Gandalph@NirvanaInc.com" -a %backup_out% -a %restore_out% -a %sanitize_out% -a %grant_out% 

have fun! :)

Saturday, March 8, 2014

Postgres en_US - UTF 8 Collation on Windows

I was restoring a schema today on a stage server and ran into a collation error. The source of the schema backup was PostgreSQL 9.1 on CentOS, and the destination was PostgreSQL 9.1 on Windows Server 2008 R2.

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2125939; 1259 50334466 INDEX ix_rd_vendorlist_facts_defaultsort_en_us
pg_restore: [archiver (db)] could not execute query: ERROR: collation "public.en_US" for encoding "UTF8" does not exist

After reading a related Stack Overflow thread, I realized the en_US locale is Linux-specific and the Windows equivalent is 'English_United States.1252'. I created these collations in the template database and in the user database I was trying to restore the schema into, and the restore finally worked.

CREATE COLLATION pg_catalog."en_US" (
 LC_COLLATE = 'English_United States.1252'
 ,LC_CTYPE = 'English_United States.1252'
 );

CREATE COLLATION pg_catalog."fr_CA" (
 LC_COLLATE = 'French_Canada.1252'
 ,LC_CTYPE = 'French_Canada.1252'
 );

CREATE COLLATION pg_catalog."pt_BR" (
 LC_COLLATE = 'Portuguese_Brazil.1252'
 ,LC_CTYPE = 'Portuguese_Brazil.1252'
 );

-- add public collations
CREATE COLLATION PUBLIC."en_US" (
 LC_COLLATE = 'English_United States.1252'
 ,LC_CTYPE = 'English_United States.1252'
 );

CREATE COLLATION PUBLIC."fr_CA" (
 LC_COLLATE = 'French_Canada.1252'
 ,LC_CTYPE = 'French_Canada.1252'
 );

CREATE COLLATION PUBLIC."pt_BR" (
 LC_COLLATE = 'Portuguese_Brazil.1252'
 ,LC_CTYPE = 'Portuguese_Brazil.1252'
 );
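
A quick way to confirm the collations are in place is to check pg_catalog.pg_collation:

SELECT collname, collcollate, collctype
FROM pg_collation
WHERE collname IN ('en_US', 'fr_CA', 'pt_BR');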

Tuesday, October 1, 2013

Setup: Python/Flask dev environment on windows using virtualenv and pip

Chocolatey is a sudo apt-get for the Windows crowd, and it's awesome! Chocolatey makes it incredibly easy to set up multiple dev environments for Python/Flask/Django, which is otherwise a multi-step process (installing setuptools, pip, virtualenv etc.). Here are the steps.

Install Chocolatey

Launch a PowerShell prompt using "Run as Administrator" and set the execution policy to unrestricted.

Set-ExecutionPolicy Unrestricted

And run the following command in the same window to install Chocolatey:

iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))

Once it's installed, you'll have cinst added to your path; it is the equivalent of apt-get install.

Install Python

If you don't already have Python installed, run the following in a PowerShell/cmd window:

cinst Python

This will install Python 2.7 at C:\Python27. We need a few more things to set up a proper dev environment. Instead of installing the Flask package directly into the main Python installation, it can be kept separate using virtualenv. If you are developing one app on Django and another using Flask and would like to keep their environments separate, virtualenv is the way to go.

Install Pip

cinst pip

pip install virtualenvwrapper
pip install virtualenvwrapper-powershell

This will add virtualenv.exe to your system path, and you can create multiple Python dev environments with it. Let's call our Flask app FlaskApp, create a directory for it on the D drive, and create a virtual env in that folder.

virtualenv FlaskApp

This will create a Python instance in the FlaskApp folder. Now activate the virtual environment we just created: cd to FlaskApp\Scripts and run activate.

cd FlaskApp\Scripts\
activate
#your command line should change to (FlaskApp) D:\FlaskApp\Scripts. 
pip install flask
# To come out of (FlaskApp), simply type 'deactivate'

Now you are all set. We created a virtual environment called FlaskApp and installed the Flask package. Let's see if it works by creating a sample app. Create hello.py with the code below:

#!D:\FlaskApp\Scripts\python.exe
# the line above points at python.exe from our virtualenv, not the system-wide Python
from flask import Flask
app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    app.run()

Run

python hello.py

It should run the web server on 127.0.0.1:5000.
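
As a quick smoke test, you can fetch the page from a second console; a small Python 2 sketch, assuming the dev server is still running:

import urllib2
# should print 'Hello World!' if the app came up on the default port
print urllib2.urlopen('http://127.0.0.1:5000/').read()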