In this article, I will take you through building a streamlined Continuous Deployment pipeline that uploads version-controlled artifacts to a designated AWS S3 bucket, reducing the time it takes to get feedback on code changes.

The Power of a Simple Shell Script: Automating AWS S3 Uploads

As a DevOps engineer, you’re often faced with the challenge of deploying code and artifacts to different environments. With the increasing popularity of cloud computing, many engineers are using AWS S3 as a cost-effective and scalable solution to store and manage their artifacts. However, uploading files to S3 can be a time-consuming process, especially when you need to test changes in lower environments.

Enter the shell script: a simple tool that can automate this process and save you time. In this blog post, we will walk through a script that automates uploading artifacts to AWS S3. It is a great example of a “poor man’s CI/CD pipeline” and can be a valuable addition to any DevOps engineer’s toolbox.

How it Works

The script is written in bash and makes use of the AWS CLI to upload files to S3. It accepts file extensions as parameters and uploads all files with the specified extension in two local directories, dir1 and dir2, to the specified S3 bucket and location. The script also checks if the AWS CLI is installed and provides an error message if it is not.

If “all” is provided as the only parameter, the script uploads every file in both directories; if “all” is combined with other file extensions, it exits with an error message. The script can also check out a specified Git branch before uploading, defaulting to “master” if no branch name is provided. A few example invocations are shown below.
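For example, assuming the script is saved as script_name.sh (matching the usage message in the script below) and your artifacts use .jar and .war extensions, typical invocations look like this:

# Upload only .jar and .war files found in dir1 and dir2
./script_name.sh jar war

# Upload every file from both directories
./script_name.sh all

# Mixing 'all' with specific extensions is rejected with an error
./script_name.sh all jar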

Customizability

One of the biggest advantages of this script is its customizability. By changing the values of the bucket_name and s3_location variables, you can easily specify the S3 bucket and location you want to upload your files to. Similarly, you can change the values of the dir1 and dir2 variables to specify the local directories you want to upload files from.
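For instance, a configuration for a hypothetical build-artifacts bucket (these values are placeholders for illustration, not defaults shipped with the script) might look like this:

bucket_name="my-artifact-bucket"   # target S3 bucket
s3_location="builds/dev"           # key prefix inside the bucket
dir1="./target"                    # e.g. compiled build output
dir2="./dist"                      # e.g. packaged distribution files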

Time-Saving Benefits

This script saves time by automating the process of uploading files to S3. Instead of manually uploading files each time you make changes, you can simply run this script and have your files uploaded in a matter of seconds. This is especially useful when testing changes in lower environments, where you need to quickly deploy and test your code.

Cost-Effective Solution

The best part about this script is that it is free of cost and doesn’t require any proprietary software or cloud instances. All you need is a computer with the AWS CLI installed and you’re good to go. This makes it a cost-effective solution for any engineer looking to streamline their deployment process.
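If the AWS CLI is not set up yet, a quick sanity check looks like this (aws configure prompts for an access key, secret key, and default region, which it stores under ~/.aws/):

# Verify the AWS CLI is installed
aws --version

# Configure credentials and a default region
aws configure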

Conclusion

In conclusion, the script provided is a great example of a simple yet powerful tool that can help automate the process of uploading files to AWS S3. Its customizability, time-saving benefits, and cost-effectiveness make it a valuable tool for any DevOps engineer looking to streamline their deployment process. Whether you’re looking to use it as a “poor man’s CI/CD pipeline” or simply as a way to quickly test changes in lower environments, this script is a great starting point for automating your deployment process.

#!/bin/bash
# AWS S3 bucket and location
bucket_name="<bucket-name>"
s3_location="<s3-location>"

# Local files location
dir1="<directory-path-1>"
dir2="<directory-path-2>"


# Check if the AWS CLI is installed
if ! command -v aws > /dev/null 2>&1; then
  echo "AWS CLI is not installed. Please install it and try again."
  exit 1
fi

# Function to upload files to S3 based on file extensions
upload_to_s3() {
  local dir=$1
  local extension=$2
  # Match regular files ending in the extension; an empty extension matches every file
  while IFS= read -r file; do
    filename=$(basename "$file")
    if aws s3 cp "$file" "s3://$bucket_name/$s3_location/$filename"; then
      echo "Successfully uploaded $filename to S3."
    else
      echo "Failed to upload $filename to S3."
      exit 1
    fi
  done < <(find "$dir" -type f -name "*${extension}")
}

# Check out the branch to upload from (leave empty to default to "master")
branch_name="<branch-name>"
if [ -z "$branch_name" ]; then
  branch_name="master"
fi
echo "Checking out branch '$branch_name'..."
git checkout "$branch_name" || { echo "Failed to check out branch '$branch_name'."; exit 1; }

# Main program
if [ $# -eq 0 ]; then
  echo ""
  echo "Error: At least one file extension or 'all' must be provided as a parameter."
  echo "Usage: ./script_name.sh [file_extension1] [file_extension2] ... [file_extensionN] OR ./script_name.sh all"
  echo ""
  echo "Your files are important. Try again with the correct parameters."
  exit 1
elif [ $# -eq 1 ] && [ "$1" == "all" ]; then
  # Upload all files if 'all' is provided as the only parameter
  upload_to_s3 "$dir1" ""
  upload_to_s3 "$dir2" ""
else
  # Reject 'all' when it is combined with specific file extensions
  for extension in "$@"; do
    if [ "$extension" == "all" ]; then
      echo "Error: 'all' cannot be used together with other file extensions."
      echo "Usage: ./script_name.sh [file_extension1] [file_extension2] ... [file_extensionN] OR ./script_name.sh all"
      exit 1
    fi
  done
  # Upload files matching each file extension provided as a parameter
  for extension in "$@"; do
    upload_to_s3 "$dir1" ".$extension"
    upload_to_s3 "$dir2" ".$extension"
  done
fi


The benefits of running databases in AWS are compelling, but how do you get your data there? In this session, we explore, at a very high level, how to use the AWS Database Migration Service (DMS) to migrate on-premises SQL Server tables to DynamoDB in AWS.
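As a rough sketch of the moving parts (the endpoint and replication-instance ARNs below are placeholders, and the table-mappings file is an assumed example), a DMS migration task can be created and started from the AWS CLI like this:

# Create a full-load migration task from the SQL Server source to the DynamoDB target
aws dms create-replication-task \
  --replication-task-identifier sqlserver-to-dynamodb \
  --source-endpoint-arn <sqlserver-endpoint-arn> \
  --target-endpoint-arn <dynamodb-endpoint-arn> \
  --replication-instance-arn <replication-instance-arn> \
  --migration-type full-load \
  --table-mappings file://table-mappings.json

# Start the task once it has been created
aws dms start-replication-task \
  --replication-task-arn <replication-task-arn> \
  --start-replication-task-type start-replication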

I will write up a follow-up blog post focusing on the nitty-gritty details of this migration. Until then, happy cloud surfing 🙂
