Archive for the ‘AWS’ Category

In this article, I will take you through the creation of a streamlined Continuous Deployment pipeline that uploads version-controlled artifacts to a designated AWS S3 bucket, reducing the time it takes to get feedback on changes to your codebase.

The Power of a Simple Shell Script: Automating AWS S3 Uploads

As a DevOps engineer, you’re often faced with the challenge of deploying code and artifacts to different environments. With the increasing popularity of cloud computing, many engineers are using AWS S3 as a cost-effective and scalable solution to store and manage their artifacts. However, uploading files to S3 can be a time-consuming process, especially when you need to test changes in lower environments.

Enter the Shell Script: a powerful tool that can help automate this process and save you time. In this blog post, we will be discussing a script that automates the uploading of artifacts to AWS S3. This script is a great example of a “poor man’s CI/CD pipeline” and can be a valuable tool for any DevOps engineer looking to streamline their deployment process.

How it Works

The script is written in bash and makes use of the AWS CLI to upload files to S3. It accepts file extensions as parameters and uploads all files with the specified extension in two local directories, dir1 and dir2, to the specified S3 bucket and location. The script also checks if the AWS CLI is installed and provides an error message if it is not.

The script can also upload every file if "all" is provided as the only parameter; if "all" is combined with other file extensions, it exits with an error. Finally, it can check out a specified Git branch before uploading, defaulting to "master" if no branch name is provided.
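For example, assuming you save the script as script_name.sh (the name used in its usage message), typical invocations would look like this:

./script_name.sh jar war   # upload only *.jar and *.war files from dir1 and dir2
./script_name.sh all       # upload every file from both directories
./script_name.sh           # prints the usage message and exits with an error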

Customizability

One of the biggest advantages of this script is its customizability. By changing the values of the bucket_name and s3_location variables, you can easily specify the S3 bucket and location you want to upload your files to. Similarly, you can change the values of the dir1 and dir2 variables to specify the local directories you want to upload files from.
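As a purely illustrative example, pointing the script at a hypothetical bucket and a pair of build directories is just a matter of editing the variables at the top:

# Illustrative values only; replace with your own bucket, prefix, and paths
bucket_name="my-artifacts-bucket"
s3_location="builds/dev"
dir1="$HOME/projects/app/target"
dir2="$HOME/projects/app/config"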

Time-Saving Benefits

This script saves time by automating the process of uploading files to S3. Instead of manually uploading files each time you make changes, you can simply run this script and have your files uploaded in a matter of seconds. This is especially useful when testing changes in lower environments, where you need to quickly deploy and test your code.

Cost-Effective Solution

The best part about this script is that it is free of cost and doesn’t require any proprietary software or cloud instances. All you need is a computer with the AWS CLI installed and you’re good to go. This makes it a cost-effective solution for any engineer looking to streamline their deployment process.
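If you don't have the AWS CLI yet, one common way to install and configure it on Linux is the official v2 installer (the commands below follow the standard AWS instructions; adjust for your platform):

# Install the AWS CLI v2 and configure credentials
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws configure   # supply your access key, secret key, and default region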

Conclusion

In conclusion, the script provided is a great example of a simple yet powerful tool that can help automate the process of uploading files to AWS S3. Its customizability, time-saving benefits, and cost-effectiveness make it a valuable tool for any DevOps engineer looking to streamline their deployment process. Whether you’re looking to use it as a “poor man’s CI/CD pipeline” or simply as a way to quickly test changes in lower environments, this script is a great starting point for automating your deployment process.

#!/bin/bash
# AWS S3 bucket and location
bucket_name="<bucket-name>"
s3_location="<s3-location>"

# Local files location
dir1="<directory-path-1>"
dir2="<directory-path-2>"


# Check if the AWS CLI is installed
if ! command -v aws > /dev/null 2>&1; then
  echo "AWS CLI is not installed. Please install it and try again."
  exit 1
fi

# Function to upload files to S3 based on file extensions
upload_to_s3() {
  local dir=$1
  local extension=$2
  # An empty extension matches every file; otherwise match "*<extension>"
  for file in $(find "$dir" -type f -name "*${extension}"); do
    filename=$(basename "$file")
    if aws s3 cp "$file" "s3://$bucket_name/$s3_location/$filename"; then
      echo "Successfully uploaded $filename to S3."
    else
      echo "Failed to upload $filename to S3."
      exit 1
    fi
  done
}

# Checkout branch (leave branch_name empty to default to master)
branch_name="<branch-name>"
if [ -z "$branch_name" ]; then
  branch_name="master"
fi
echo "Checking out branch '$branch_name'..."
git checkout "$branch_name"

# Main program
if [ $# -eq 0 ]; then
  echo ""
  echo "Error: At least one file extension or 'all' must be provided as a parameter."
  echo "Usage: ./script_name.sh [file_extension1] [file_extension2] ... [file_extensionN] OR ./script_name.sh all"
  echo ""
  echo "Welcome, your files are important. Try again with the correct parameters."
  exit 1
elif [ $# -eq 1 ] && [ "$1" == "all" ]; then
  # Upload all files if 'all' is provided as the only parameter
  upload_to_s3 "$dir1" ""
  upload_to_s3 "$dir2" ""
else
  # Upload files based on file extensions provided as parameters
  for extension in "$@"; do
    if [ "$extension" == "all" ]; then
      echo "Error: 'all' cannot be used together with other file extensions."
      echo "Usage: ./script_name.sh [file_extension1] [file_extension2] ... [file_extensionN] OR ./script_name.sh all"
      exit 1
    fi
    upload_to_s3 "$dir1" ".$extension"
    upload_to_s3 "$dir2" ".$extension"
  done
fi


The benefits of running databases in AWS are compelling, but how do you get your data there? In this session, we will explore, at a very high level, how to use the AWS Database Migration Service (DMS) to migrate on-premises SQL Server tables to DynamoDB.
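To give a flavor of the moving parts before the detailed post, a DMS migration driven from the CLI boils down to creating source and target endpoints, a replication instance, and a replication task. Here is a rough sketch only, with every ARN and identifier below being a placeholder:

# Assumes the SQL Server source endpoint, the DynamoDB target endpoint, and the
# replication instance already exist, and that table-mappings.json lists the tables to migrate
aws dms create-replication-task \
  --replication-task-identifier sqlserver-to-dynamodb \
  --source-endpoint-arn <sqlserver-source-endpoint-arn> \
  --target-endpoint-arn <dynamodb-target-endpoint-arn> \
  --replication-instance-arn <replication-instance-arn> \
  --migration-type full-load \
  --table-mappings file://table-mappings.json

aws dms start-replication-task \
  --replication-task-arn <replication-task-arn> \
  --start-replication-task-type start-replication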

I will write up a follow-up blog post focusing on the nitty-gritty details of this migration. Until then, happy cloud surfing 🙂

We are living in a world where phones and other devices with advanced biometric authentication are increasingly becoming the norm. Apart from the tremendous convenience they offer, they also provide a higher level of security: no more typing in a passcode and worrying about someone watching over our shoulders. It should be the same with the databases that store our most valuable and sensitive information. In this article, I am going to show you how we can achieve that with a Redshift database hosted in the Amazon cloud.

Commonly, Amazon Redshift users log on to the database by providing a database username and password, or they use a password file (.pgpass) in the user's home directory with psql. Both options require you to maintain passwords somewhere, which is rarely ideal. As an alternative, we can configure our systems to let users generate temporary database credentials and log on to the database based on their IAM credentials, on the fly.

Amazon Redshift provides the GetClusterCredentials API action to generate temporary database user credentials. We can configure our SQL client with the Amazon Redshift JDBC or ODBC driver, which manages the process of calling the GetClusterCredentials action, retrieving the database user credentials, and establishing a connection between the SQL client and the Amazon Redshift database. You can also have your database application call the GetClusterCredentials action programmatically, retrieve the credentials, and connect to the database.

Create an IAM Role or User With Permissions to Call GetClusterCredentials

Our SQL client needs permission to call the GetClusterCredentials action on our behalf. We manage those permissions by creating an IAM role and attaching an IAM permissions policy that grants (or restricts) access to the GetClusterCredentials action and related actions.

Create an IAM user or role.

Using the IAM service, create an IAM user or role. You can also use an existing user or role. For example, if you created an IAM role for identity provider access, you can attach the necessary IAM policies to that role. I have used an existing role for my test but here is how to create a new user if you need to.

Go to the IAM service in the AWS console and click Add user.


You can choose either Programmatic access or AWS Management Console access.

Create and attach a policy to the above user


Go to Policies and click Create Policy


I picked 'Create Your Own Policy' so I could copy and paste the code below, but you can let AWS build the policy for you if you choose 'Policy Generator'.

Copy and paste the policy document below into the policy editor. Make sure to update the "Resource" field for your own cluster; see the Resource ARN naming convention for Redshift here.

Once you have the policy document in place, validate it for any errors and then click 'Create Policy'.


{
    "Version": "2012-10-17",
    "Statement": [
    {
        "Sid": "Stmt1510160971000",
        "Effect": "Allow",
        "Action": [
          "redshift:GetClusterCredentials"
         ],
        "Resource": [
            "arn:aws:redshift:us-west-2:1234567890:dbuser:datag/temp_creds_user",
            "arn:aws:redshift:us-west-2:1234567890:dbname:datag/dataguser"
         ]
     }
  ]
}

Attach the above policy

Once you have created the policy, attach it to the user as shown below. This grants the user the required privileges.

Click Add permissions

Select the policy you want to attach

Click Apply permissions
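If you prefer the command line, a rough CLI equivalent of the console steps above might look like the following (the user and policy names are illustrative, and policy.json holds the policy document shown earlier):

# Create the user, create the policy, and attach the policy to the user
aws iam create-user --user-name redshift-temp-creds
aws iam create-policy --policy-name RedshiftGetClusterCredentials --policy-document file://policy.json
aws iam attach-user-policy --user-name redshift-temp-creds \
  --policy-arn arn:aws:iam::<account-id>:policy/RedshiftGetClusterCredentials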

Create a Database User and Database Groups

You can create a database user that you use to log on to the cluster database. If you create temporary user credentials for an existing user, you can disable the user’s password to force the user to log on with the temporary password. Alternatively, you can use the GetClusterCredentials Autocreate option to automatically create a new database user.

create user temp_creds_user password disable;
create group auto_login_group with user temp_creds_user;
grant all on all tables in schema public to group auto_login_group;


Use the admin password to run the above queries in SQLWorkbench.

Connecting through SQL Client Tool – Configuring JDBC connection

You can configure your SQL client with an Amazon Redshift JDBC (or ODBC) driver that manages the process of creating database user credentials and establishing a connection between your SQL client and your Amazon Redshift database.

Download the latest Amazon Redshift JDBC driver from the Configure a JDBC Connection page.

Important: The Amazon Redshift JDBC driver must be version 1.2.7.1003 or later.

Create a JDBC URL with the IAM credentials options

jdbc:redshift:iam://examplecluster.abcd1234.us-west-2.redshift.amazonaws.com:5439/temp_creds_user;

In the SQLWorkbench URL field, use the connection string below:

jdbc:redshift:iam://examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dbname?AccessKeyID=abcd&SecretAccessKey=abcde1234567890fghijkl

Add the JDBC options that the driver uses to call the GetClusterCredentials API action (don't include these options if you call GetClusterCredentials programmatically). In SQLWorkbench, you will notice that the connection succeeds even without providing a password.
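For example, a connection URL that hands credential generation off to the driver might look like this (the cluster endpoint is the same placeholder used above; DbUser, AutoCreate, and DbGroups are standard IAM options of the Redshift JDBC driver):

jdbc:redshift:iam://examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dbname?DbUser=temp_creds_user&AutoCreate=true&DbGroups=auto_login_group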


Connecting through Redshift CLI or API – Generating IAM Database Credentials

To generate database credentials, run the Redshift CLI command below with your cluster name and the username created above.

aws redshift get-cluster-credentials --cluster-identifier exampleCluster --db-user temp_creds_user --db-name birch --duration-seconds 3600

The command returns a database password generated on the fly that can be used to log in to Redshift with psql. You can easily automate this command in bash to store the generated password in a file and supply that file when logging in, eliminating the copy-and-paste work.


Supply the returned password to psql to log in.
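A minimal sketch of that automation, assuming the example cluster endpoint, database, and user from above (the generated credentials map to the database user prefixed with IAM:):

# Fetch a temporary password and pass it straight to psql
db_password=$(aws redshift get-cluster-credentials \
  --cluster-identifier exampleCluster \
  --db-user temp_creds_user \
  --db-name birch \
  --duration-seconds 3600 \
  --query 'DbPassword' --output text)

PGPASSWORD="$db_password" psql \
  -h examplecluster.abcd1234.us-west-2.redshift.amazonaws.com \
  -p 5439 -d birch -U "IAM:temp_creds_user"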

Happy coding!
