Get My IP and patch AWS Security Group

My particular use case: in my own AWS account, where I do most of my R&D, I had one security group that existed only so I could SSH into EC2 instances. Back in 2020, during the pandemic, I went freelance for some time while serving the notice period with one company and negotiating with another. During that stretch I was mostly connected through mobile hotspots, switching between JIO on a Galaxy M14, Airtel on a Galaxy A54, and BSNL on the M14's second SIM, and every switch made updating that security group a real pain.

Being lazy, and having done DevOps and automation for a long time, I started working on an idea, and the outcome was an AWS serverless clone of the familiar "what is my IP" service, named Echo My IP. Check it out on GitHub; the Node.js code and the AWS SAM template to deploy it are given over there.

Next, using the standard Ubuntu terminal text editor, I added the following to the .bash_aliases file.

sgupdate()
{
  # Fetch the current public IP from the Echo My IP endpoint
  currentip=$(curl --silent https://{api gateway url}/Prod/ip/)
  # Dump the security group's existing permissions to a tmpfs file
  /usr/local/bin/aws ec2 describe-security-groups --group-ids "$AWS_SECURITY_GROUP" > /dev/shm/permissions.json
  # Revoke every existing CIDR, skipping any wildcard /0 entries
  grep CidrIp /dev/shm/permissions.json | grep -v '/0' | awk -F'"' '{print $4}' | while read -r cidr
   do
     /usr/local/bin/aws ec2 revoke-security-group-ingress --group-id "$AWS_SECURITY_GROUP" --ip-permissions "FromPort=-1,IpProtocol=-1,IpRanges=[{CidrIp=$cidr}]"
   done
  # Whitelist the current IP for all protocols and ports
  /usr/local/bin/aws ec2 authorize-security-group-ingress --group-id "$AWS_SECURITY_GROUP" --protocol "-1" --cidr "$currentip/32"
}

alias aws-permit-me='sgupdate'

I already have a .env file for every project I am handling, and my cd command checks for the existence of a .env file and sources it if it exists.

cwd(){
  # Change directory, then source a .env file if one exists there
  cd "$1" || return
  if [ -f .env ] ; then
    . ./.env
  fi
}

alias cd='cwd'

The .env file has the following structure, with the corresponding values after the ‘=’ of course.

export AWS_DEFAULT_REGION=
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_SECURITY_GROUP=
export AWS_SSH_ID=
export AWS_ACCOUNT=
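
With all of this in place, a typical session looks like the following sketch (the project path and host name are hypothetical):

cd ~/projects/rnd-account   # cwd() sources the project's .env automatically
aws-permit-me               # revoke stale CIDR entries, whitelist the current IP
ssh ubuntu@my-ec2-host      # SSH now passes through the security group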

Managing firewall rules with a dynamic IP is a common problem for people working from home. Automating the process with a serverless function and a shell alias is a great way to simplify things, and sharing it on GitHub helps others and gives back to the community.

This method provides several advantages:

  • Automation: Eliminates the tedious manual process of updating security group rules.
  • Serverless: Cost-effective, as you only pay for the compute time used.
  • Shell Alias: Provides a convenient and easy-to-remember way to trigger the update.
  • GitHub Sharing: Makes the solution accessible to others.
  • Secure: Security group modification happens through the AWS CLI, using the credentials already sourced into the terminal environment.

Optimizing WordPress Performance with AWS, Docker and Jenkins

At Jijutm.com, I wanted to deliver a fast and reliable experience for my readers. To achieve this, I implemented a containerized approach using Docker and Jenkins to manage this WordPress site. This article delves into the details of the setup and how it contributes to exceptional website performance.

Why Containers?

Traditional server management often involves installing software directly on the operating system. This can lead to dependency conflicts, versioning issues, and a complex environment. Docker containers provide a solution by encapsulating applications with all their dependencies into isolated units. This offers several advantages:

  • Consistency: Docker ensures a consistent environment regardless of the underlying operating system. This simplifies development, testing, and deployment.
  • Isolation: Applications running in containers are isolated from each other, preventing conflicts and improving security.
  • Portability: Docker containers are portable across different environments, making it easy to migrate your application between development, staging, and production.

The Containerized Architecture

This WordPress site leverages three Docker containers:

  1. Nginx: A high-performance web server that serves the content of this website efficiently.
  2. PHP-FPM: A FastCGI process manager that executes PHP code for dynamic content generation in WordPress.
  3. MariaDB: A robust and popular open-source relational database management system that stores the WordPress data and is fully compatible with MySQL.

These containers work together seamlessly to deliver a smooth user experience. Nginx acts as the front door, handling user requests and routing them to the PHP-FPM container for processing. PHP-FPM interacts with the MariaDB container to retrieve and update website data.
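
As a rough sketch, the three containers can be wired together on a user-defined Docker network. The image versions, container names, and paths below are illustrative, not the exact production values:

docker network create wpnet

# MariaDB stores the WordPress data
docker run -d --name db --network wpnet \
  -e MYSQL_ROOT_PASSWORD=change-me -e MYSQL_DATABASE=wordpress \
  -v db_data:/var/lib/mysql mariadb:10.11

# PHP-FPM executes the WordPress code
docker run -d --name php --network wpnet \
  -v wp_code:/var/www/html php:8.2-fpm

# Nginx fronts the stack and forwards PHP requests to the php container
docker run -d --name web --network wpnet -p 80:80 \
  -v wp_code:/var/www/html \
  -v "$PWD/nginx.conf":/etc/nginx/conf.d/default.conf nginx:stable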

Leveraging Jenkins for Automation

While Docker simplifies application management, automating deployments is crucial for efficient workflow. This is where Jenkins comes in. Jenkins is an open-source automation server that we use to manage the build and deployment process for our WordPress site.

Here’s how Jenkins integrates into this workflow:

  1. Code Changes: Whenever we make changes to the WordPress codebase, we push them to a version control system like Git.
  2. Jenkins Trigger: The push to the Git repository triggers a job in Jenkins.
  3. Build Stage: Jenkins pulls the latest code, builds a new Docker image containing the updated WordPress application, and pushes it to a Docker registry.
  4. Deployment Stage: The new Docker image is deployed to our hosting environment, updating the running containers with the latest code.

This automation ensures that our website stays up-to-date with the latest changes without any manual intervention.
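
Inside the Jenkins job, the build and deployment stages boil down to a few shell steps. This is a hedged outline with placeholder registry and image names, not the exact pipeline:

# Build stage: bake the updated code into a fresh image and push it
docker build -t registry.example.com/wordpress-site:$BUILD_NUMBER .
docker push registry.example.com/wordpress-site:$BUILD_NUMBER

# Deployment stage: swap the running container for the new image
docker pull registry.example.com/wordpress-site:$BUILD_NUMBER
docker stop php && docker rm php
docker run -d --name php --network wpnet \
  registry.example.com/wordpress-site:$BUILD_NUMBER

Jenkins supplies $BUILD_NUMBER on every run, which keeps each image tag traceable to the job that produced it.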

Hooked into WordPress Post or Page Publish

Over and above maintaining the code through Jenkins, each content publish action triggers another Jenkins project, which runs a sequence of commands: wget in mirror mode converts the whole site to static HTML files; sed rewrites the URLs from the local host to the real external domain; gzip creates a .html.gz for each HTML file; and the AWS CLI syncs the static mirror folder to AWS S3, applying metadata headers that specify the content type and content encoding. Once all the files are synced, the AWS CLI issues an invalidation request to the CloudFront distribution.
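
Condensed into a script, the publish hook looks roughly like this; the local port, domain, paths, bucket, and distribution id are all placeholders:

#!/bin/bash
SITE=http://localhost:8080            # local WordPress container
MIRROR=/var/tmp/static-mirror
BUCKET=s3://example-bucket
DIST_ID=E2EXAMPLE1234

# 1. Mirror the dynamic site into static HTML files
wget --mirror --page-requisites --no-host-directories \
     --directory-prefix="$MIRROR" "$SITE"

# 2. Rewrite local URLs to the public domain
find "$MIRROR" -name '*.html' \
  -exec sed -i 's|http://localhost:8080|https://example.com|g' {} +

# 3. Pre-compress every HTML file, keeping the original
find "$MIRROR" -name '*.html' -exec gzip -k -9 {} +

# 4. Upload compressed HTML with explicit type and encoding headers,
#    then everything else, and finally invalidate the CDN cache
aws s3 sync "$MIRROR" "$BUCKET" --exclude '*' --include '*.html.gz' \
    --content-type 'text/html' --content-encoding 'gzip'
aws s3 sync "$MIRROR" "$BUCKET" --exclude '*.html.gz'
aws cloudfront create-invalidation --distribution-id "$DIST_ID" --paths '/*'

The sync is split in two so that only the pre-compressed HTML receives the explicit Content-Type and Content-Encoding headers.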

Benefits of this Approach

  • Improved Performance: Docker containers provide a lightweight and efficient environment, leading to faster loading times for this website.
  • Enhanced Scalability: I don't need to worry about scaling the application by adding more containers to handle increased traffic, as the published site is served by AWS S3 and CloudFront, which scale on their own.
  • Simplified Management: Docker and Jenkins automate a significant portion of the infrastructure management, freeing up time for development and content creation. With Docker and all its components running on my Asus TUF A17 laptop powered by Xubuntu, the hosting charges are limited to AWS Route 53, AWS S3, and AWS CloudFront only.
  • Reliable Deployments: Jenkins ensures consistent and reliable deployments, minimizing the risk of errors or downtime.
For the minimal dynamic content, like the download counters, AWS serverless Lambda functions were written and deployed to record download requests in a DynamoDB table and to display the count near any downloadable content with proper markup. Along with this, the comments were moved to Disqus, a comment system that can be used on WordPress sites as a replacement for the native WordPress comments.

Conclusion

By leveraging Docker containers and Jenkins, I have established a robust and performant foundation for this site. This approach allows me to focus on delivering high-quality content to the readers while ensuring a smooth and fast user experience.

Additional Considerations

  • Security: While Docker containers enhance security, it's essential to keep the containers updated and follow security best practices for each service.
  • Monitoring: Monitoring the health and performance of your containers is crucial. Tools like Docker Stats and Prometheus can provide valuable insights.

Hope this article provides a valuable perspective on how Docker and Jenkins can be used to optimize a WordPress website. If you have any questions, feel free to leave a comment below!

Automating Laptop Charging with AWS: A Smart Solution to Prevent Overheating

In today’s fast-paced digital world, laptops have become indispensable tools. However, excessive charging can lead to overheating, which can significantly impact performance and battery life. In this blog post, we’ll explore a smart solution that leverages AWS services to automate laptop charging, prevent overheating, and optimize battery health. I do agree that Asus provides premium support for a subscription, but this research exercise was a way to brush up my brain and build something useful on AWS. The solution is still a concept; once I start using it in production to the full extent, the shell scripts and CloudFormation template will be pushed to GitHub under my handle jthoma, in the repository code-collection/aws.

Understanding the Problem:

Overcharging can cause the battery to degrade faster and generate excessive heat. Traditional manual charging methods often lead to inconsistent charging patterns, potentially harming the battery’s lifespan.

The Solution: Automating Laptop Charging with AWS

To address this issue, we’ll utilize a combination of AWS services to create a robust and efficient automated charging system:

  1. AWS IoT Core: Purpose: This service enables secure and reliable bi-directional communication between devices and the cloud.
    How it’s used: We’ll connect a smart power outlet to AWS IoT Core, allowing it to send real-time battery level data to the cloud.
    Link: https://aws.amazon.com/iot-core/
    Getting Started: Sign up for an AWS account and create an IoT Core project.
  2. AWS Lambda: Purpose: This serverless computing service allows you to run code without provisioning or managing servers.
    How it’s used: We’ll create a Lambda function triggered by IoT Core messages. This function will analyze the battery level and determine whether to charge or disconnect the power supply.
    Link: https://aws.amazon.com/lambda/
    Getting Started: Create a Lambda function and write the necessary code in your preferred language (e.g., Python, Node.js, Java).
  3. Amazon DynamoDB: Purpose: This fully managed NoSQL database service offers fast and predictable performance with seamless scalability.
    Link: https://aws.amazon.com/dynamodb/
  4. Amazon CloudWatch: Purpose: This monitoring and logging service helps you collect and analyze system and application performance metrics.
    How it’s used: We’ll use CloudWatch to log system health and generate alarms based on battery level or temperature threshold. Also it helps to monitor the performance of our Lambda functions and IoT Core devices, ensuring optimal system health.
    Link: https://aws.amazon.com/cloudwatch/
    Getting Started: Configure CloudWatch to monitor your AWS resources and set up alarms for critical events.
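
To make the CloudWatch leg concrete, here is a hedged CLI sketch that turns a space-delimited log event such as "85 Charging" into a metric and an alarm; the log group, namespace, threshold, and topic ARN are all placeholders:

# Extract the numeric battery level from events like "85 Charging"
aws logs put-metric-filter \
  --log-group-name laptop-battery \
  --filter-name battery-level \
  --filter-pattern '[level, state]' \
  --metric-transformations \
      metricName=BatteryLevel,metricNamespace=Laptop,metricValue='$level'

# Alarm when the battery stays above 90% (time to cut the charger)
aws cloudwatch put-metric-alarm \
  --alarm-name battery-high \
  --namespace Laptop --metric-name BatteryLevel \
  --statistic Maximum --period 300 --evaluation-periods 1 \
  --threshold 90 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:ap-south-1:123456789012:charger-control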

How it Works:

  1. Data Collection: My Ubuntu system, with the help of a shell script, uses the AWS CLI to send real-time battery level data to CloudWatch Logs (see the sketch after this list).
  2. Data Processing: CloudWatch metric filter alarms trigger a Lambda function that is set up for the appropriate actions.
  3. Action Execution: The Lambda function sends commands to the smart power outlet to control the charging process.
  4. Data Storage: Historical battery level data is stored in CloudWatch Logs for analysis using Athena and further optimization.
  5. Monitoring and Alerting: CloudWatch monitors the system’s health and sends alerts if any issues arise.
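
A minimal sketch of the collection script, assuming the standard Linux battery sysfs path and the same placeholder log group and stream names as above:

#!/bin/bash
# Read battery state from sysfs (the path may differ per machine)
BAT=/sys/class/power_supply/BAT0
LEVEL=$(cat "$BAT/capacity")    # percentage, e.g. 85
STATE=$(cat "$BAT/status")      # Charging / Discharging / Full

# Ensure the log stream exists (ignore the error if it already does)
aws logs create-log-stream --log-group-name laptop-battery \
  --log-stream-name "$(hostname)" 2>/dev/null

# Ship one space-delimited event, e.g. "85 Charging", so the metric
# filter pattern [level, state] can extract the numeric level
# (newer CLI versions no longer require a sequence token)
TS=$(date +%s%3N)
aws logs put-log-events \
  --log-group-name laptop-battery \
  --log-stream-name "$(hostname)" \
  --log-events "[{\"timestamp\":$TS,\"message\":\"$LEVEL $STATE\"}]"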

Benefits of Automated Charging:

  • Optimized Battery Health: Prevents overcharging and undercharging, extending battery life.
  • Reduced Heat Generation: Minimizes thermal stress on the laptop.
  • Improved Performance: Ensures optimal battery performance, leading to better system responsiveness.
  • Energy Efficiency: Reduces energy consumption by avoiding unnecessary charging.

Conclusion

By leveraging AWS services, we arrive at a sophisticated automated charging system that safeguards the laptop’s battery health and enhances its overall performance. This solution empowers you to take control of your device’s charging habits and enjoy a longer-lasting, cooler, and more efficient laptop.

Start Your AWS Journey Today, Sign Up for Free!

Ready to embark on your cloud journey? Sign up for an AWS account and explore the vast possibilities of cloud computing. With AWS, you can build innovative solutions and transform your business.

Built a Feature-Rich QR Code Generator with Generative AI and JavaScript

In today’s digital world, QR codes have become ubiquitous. From restaurant menus to product packaging, these scannable squares offer a convenient way to access information. This article details the creation of a versatile QR code generator that leverages the power of generative AI and JavaScript for a seamless user experience, all within the user’s environment.

Empowering Development with Generative AI

The project began by utilizing generative AI tools to generate boilerplate code. This innovative approach demonstrates the potential of AI to streamline development processes. Prompts are used to create a foundation, allowing developers to focus on implementing advanced functionalities.

Generative AI Coding primer

Open Google Gemini and type the following

Assume the role of an HTML coding expert <enter>

Watch for the response, and if it is positive, go ahead and continue to tell it what you want. Actually for this project the next prompt I gave was:

Show me an HTML boilerplate starter with Bootstrap and jQuery linked from public CDN libraries.

Then each element was described to it in turn: adding the form, a text input, then a reset button, a submit button, and a download button that is initially hidden. The rest of the functionality was very easy with the qrcodejs library and a further new chat with a role setting.

Assume the role of a JavaScript programmer with hefty JQuery experience.

Further prompts were curated to get the whole builder ready; still, I had to apply a bit of my own expertise and common sense. Local testing was done using the Node.js utility http-server, which was installed with Gemini's suggested command.

prompt:

node http server install

from the response:

npm install http-server -g

Key Functionalities

The QR code generator boasts several user-friendly features, all processed entirely on the client-side (user’s device):

  • Phone Number Validation and WhatsApp Integration:
    • Users can input phone numbers, and the code validates them using regular expressions.
    • Validated numbers are converted into WhatsApp direct chat links, eliminating the need for external servers and simplifying communication initiation.
  • QR Code Generation for Phone Calls:
    • The application generates QR codes that trigger phone calls when scanned by a mobile camera, using the proper intent URL: tel://<full mobile number> (see the sketch after this list).
    • This is a practical solution for scenarios like displaying contact information on a car, without ever sending your phone number outside your device.
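
Outside the browser, the same URL formats can be exercised from a shell with the qrencode utility, assuming it is installed (apt install qrencode); the numbers below are placeholders:

# QR code that opens a WhatsApp chat with the given number
qrencode -o whatsapp.png 'https://wa.me/919999999999'

# QR code that dials a number when scanned (tel: intent URL)
qrencode -o call.png 'tel://+919999999999'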

Technical Deep Dive

The project leverages the following technologies, emphasizing the client-side approach:

  • Client-Side Functionality with JavaScript:
    • This eliminates the need for a server, making the application fast, efficient, and easy to deploy. Users experience no delays while generating QR codes, and all processing stays within their browser.
  • AWS S3 Website Delivery:
    • Cost-effective and scalable hosting for the static website ensures smooth operation. S3 simply serves the application files, without any server-side processing of user data.
  • AWS CloudFront for Global Edge Caching and Free SSL:
    • CloudFront enhances performance by caching static content closer to users globally, minimizing latency. Free SSL certification guarantees secure communication between users and your website, even though no user data is transmitted.
    • Please visit, review, and comment on my QR Code Generator. A known bug on some mobile phones is that the download fails; I will look into it as soon as possible. If that happens on your phone, take a screenshot and crop it for the time being. On Samsung devices, I think pressing the power button and volume down together takes a screenshot.

Unleashing the Power of AWS DynamoDB: Exploring Common Use Cases

Amazon Web Services (AWS) DynamoDB stands tall as a powerful, fully managed NoSQL database service, offering seamless scalability, high availability, and low latency. Its flexibility and performance make it a favorite among developers and businesses across various industries. Let’s delve into some common use cases where DynamoDB shines brightly:

1. Real-Time Analytics: DynamoDB’s ability to handle massive volumes of data with lightning-fast response times makes it ideal for real-time analytics applications. Whether it’s tracking user interactions on a website, monitoring IoT devices, or analyzing streaming data, DynamoDB efficiently stores and retrieves data, enabling businesses to make data-driven decisions in real-time.

2. Ad Tech Platforms: Ad tech platforms deal with immense amounts of data generated from ad impressions, clicks, and user interactions. DynamoDB’s ability to handle high throughput and scale automatically makes it an excellent choice for storing and retrieving this data rapidly. Additionally, its integration with other AWS services like Lambda and Kinesis enables seamless data processing and analysis pipelines.

3. Gaming Leaderboards: Gaming applications often require storing and updating leaderboards in real-time to provide players with up-to-date rankings. DynamoDB’s fast read and write capabilities, along with its scalability, make it a perfect fit for maintaining dynamic leaderboards, ensuring a smooth and engaging gaming experience for players worldwide.
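
As a simplified CLI sketch of that leaderboard pattern (table and key names are placeholders; a production design would include a player id in the key to disambiguate tied scores):

# Leaderboard table: one partition per game, scores as the sort key
aws dynamodb create-table \
  --table-name leaderboard \
  --attribute-definitions AttributeName=game_id,AttributeType=S \
                          AttributeName=score,AttributeType=N \
  --key-schema AttributeName=game_id,KeyType=HASH \
               AttributeName=score,KeyType=RANGE \
  --billing-mode PAY_PER_REQUEST

# Top 10 for one game: read the sort key in descending order
aws dynamodb query \
  --table-name leaderboard \
  --key-condition-expression 'game_id = :g' \
  --expression-attribute-values '{":g":{"S":"space-run"}}' \
  --no-scan-index-forward --limit 10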

4. Content Management Systems (CMS): Content-heavy applications, such as CMS platforms and blogging websites, benefit from DynamoDB’s ability to handle large volumes of structured and unstructured data. Whether it’s storing user-generated content, managing metadata, or tracking user interactions, DynamoDB provides the scalability and performance required to deliver content quickly and reliably to users.

5. E-commerce Applications: DynamoDB plays a crucial role in e-commerce applications by efficiently managing product catalogs, customer profiles, and transaction data. Its seamless scalability ensures that e-commerce platforms can handle sudden spikes in traffic during peak shopping seasons, while its low latency guarantees a smooth shopping experience for customers browsing and purchasing products online.

6. Internet of Things (IoT) Data Management: IoT devices generate vast amounts of data that need to be collected, processed, and analyzed in real-time. DynamoDB’s ability to handle high throughput and store structured and semi-structured data makes it an ideal choice for managing IoT data streams. Whether it’s monitoring sensor data, tracking device status, or analyzing telemetry data, DynamoDB provides the scalability and performance required for IoT applications.

7. User Session Management: Applications that require managing user sessions, such as chat applications and collaborative platforms, can leverage DynamoDB to store session data securely and efficiently. DynamoDB’s fast read and write operations ensure quick access to session data, enabling seamless user experiences across multiple devices and sessions.

8. Financial Services: In the financial services sector, DynamoDB is used for various applications, including fraud detection, risk management, and transaction processing. Its ability to handle high volumes of data with low latency makes it well-suited for real-time financial analytics and compliance reporting, ensuring the security and reliability of financial transactions.

In conclusion, AWS DynamoDB offers a versatile and scalable solution for a wide range of use cases across industries. Whether it’s real-time analytics, gaming leaderboards, e-commerce applications, or IoT data management, DynamoDB empowers businesses to build high-performance, scalable, and reliable applications that deliver exceptional user experiences. As technology continues to evolve, DynamoDB remains at the forefront, driving innovation and enabling businesses to thrive in the digital age.

Cloud Migration – A Thought Process

Everybody is running after cloud migration, and most get stuck at one stage or another, unless their product or application is still in black and white in some notebooks, or just in wireframes, and has to be built from the ground up. If you are going to build a new application, it can be designed to take full advantage of the cloud by combining multiple microservices, leaving more time and resources to shape the application into a perfectly usable solution. Here, though, we are considering the migration of existing applications to the cloud. The development language, the database, and the design approach of the whole application should all be considered when thinking about migration. In short, migration to the cloud has to be evaluated on a case-by-case basis; there is no storyboard that fits all use cases.

Continue reading “Cloud Migration – A Thought Process”

Architecture in a Serverless Mindset

Consider designing a simple serverless system to process orders within an e-commerce workflow. This is an architecture for a REST microservice that is simple to implement.

Simple Rest Architecture

How an order will be processed by this e-commerce workflow is as follows.

  1. Amazon API Gateway handles requests and responses to those API calls.
  2. Lambda contains the business logic to process the calls.
  3. Amazon DynamoDB provides persistent JSON document storage.

Though this is simple to implement, it can develop bottlenecks and failures, resulting in frustrated clients at the web front end. Analyze the flow and look for the possible failure points. Amazon API Gateway integrates with AWS Lambda through synchronous invocation and expects AWS Lambda to respond within 30 seconds. As long as that happens, all is well and good. But what if a promo gets shared over social media and a very large number of users pile up with orders? Scaling is built into the AWS services, but the throttling limits can still be reached.

The configuration of Amazon DynamoDB matters as well, since capacity specifications play a big role. AWS Lambda throttling and concurrency limits can also create failures. Linking large dynamic libraries adds initialization time, which increases AWS Lambda's cold start and eventually its latency, and that latency can overrun the HTTP request timeout of Amazon API Gateway. Deeper in the system, the business logic could have its own complications: if one request cannot be processed, the custom code in AWS Lambda could fail without any trace of the request saved to persistent storage. Considering all these factors, as well as suggestions from veterans in this walk of life, the architecture can be expanded to something like the below.

Revised Order Processing Architecture

What is the revision and what do the additional components provide as advantage? Let’s discuss it now.

  • Order information comes in through an API call over HTTP into Amazon API Gateway
  • AWS Lambda validates and populates the request into Amazon Simple Queue Service (SQS)
  • SQS integrates with AWS Lambda asynchronously; automatic retries for failed requests, as well as Dead Letter Queues (left out of the illustration), help out here (see the CLI sketch after this list)
  • Requests processed by the business logic are stored in DynamoDB
  • DynamoDB Streams could trigger another AWS Lambda to notify Customer Support about the order through SNS
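
A hedged CLI sketch of the queue wiring (queue and function names, account id, and region are placeholders; in practice the DLQ ARN would be looked up rather than hardcoded):

# Orders queue with a dead-letter queue after three failed receives
aws sqs create-queue --queue-name orders-dlq
aws sqs create-queue --queue-name orders --attributes \
  '{"RedrivePolicy":"{\"deadLetterTargetArn\":\"arn:aws:sqs:ap-south-1:123456789012:orders-dlq\",\"maxReceiveCount\":\"3\"}"}'

# Lambda polls the queue in batches; failed batches are retried, then
# the offending messages land in the DLQ for inspection
aws lambda create-event-source-mapping \
  --function-name process-order \
  --event-source-arn arn:aws:sqs:ap-south-1:123456789012:orders \
  --batch-size 10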

Digging more into the illustrations and explanations, there is more to be done to make this a full production-ready blueprint; let's leave those thoughts to upcoming serverless enthusiasts.

Conclusion

I strongly believe that I have been loyal to the core thoughts of being in a Serverless Mindset. Further cost optimization and scaling can be considered with savings plans, AWS Lambda provisioned concurrency, Amazon DynamoDB on-demand capacity settings, and making sure to optimize the business logic and reduce latency.

Rearchitecting an Old Solution

It was in 2016 that a friend approached me with a requirement for a solution. They were receiving high-resolution video files on an FTP server maintained by the media supplier. They had created an in-house, locally hosted solution that let news operators preview the video files and decide where to attach them. As they started spreading their news operations desk to multiple cities, they wanted to migrate the solution to the cloud, which they did promptly by lift and shift: the whole solution was reconfigured on a large EC2 instance with custom scripts that automatically checked the FTP location and copied any new media to local folders. When I was approached, they were experiencing sluggish streaming from the hosted EC2 instance, as the media files were being accessed from different cities at the same time. Also, the full high-definition videos had to be downloaded for the preview. They wanted to optimize bandwidth utilization and improve the operators' response times.

Continue reading “Rearchitecting an Old Solution”

php-mf in AWS Lambda running serverless

A bit outdated, though: this sample implementation was committed to my GitHub repository along with examples.

Nothing big to explain; rather, php-mf is a routing framework where the routes are defined in index.php or any included files. The normal PHP "include" statement can be used, or the MF directive MF::addon can be used to pull in further routing. All of this can be packaged into a Lambda. It uses a couple of layers: one is the public PHP 7.3 layer, and the other is the AWS PHP SDK, which I built and published. As far as I know, these are available only in the ap-south-1 region, so if you need them in a different region, please make sure you deploy the layers correctly before attempting to deploy this module.

Publish Lambda Layers across Regions

This could even be classified as re:Inventing the wheel, but it was a quick hack workaround that I found in the early stages, when we required a set of Node.js libraries and custom modules across multiple regions; in fact, I needed this in only 5 regions. Since these libraries and modules could have frequent updates, I wanted the latest package to be updated and published as a new layer version in each region, with the concerned developers notified about the new layer version as well as the changelog. To make a long story short, I was familiar with Subversion at the time, and the project code was committed to an SVN repository. This may have biased me toward the following solution in that time period; now my preference would be either “Multi-Region Deployment” using CloudFormation or “CodeCommit and CodePipeline with SNS” instead of S3 triggers.

The architecture is simple and was deployed manually at that time. Sorry to say I no longer have access to that AWS account, as it was a discarded community project and the account is closed now. I thought about posting the idea here since there was a discussion, or rather a query, about this in a community forum.
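
The core of the idea, reduced to a hedged CLI sketch (the layer name, region list, runtime, and topic ARN are placeholders):

#!/bin/bash
# Publish the packaged layer in every target region and notify developers
REGIONS="us-east-1 eu-west-1 ap-south-1 ap-southeast-1 us-west-2"
for region in $REGIONS; do
  VERSION=$(aws lambda publish-layer-version \
    --region "$region" \
    --layer-name common-node-libs \
    --zip-file fileb://layer.zip \
    --compatible-runtimes nodejs18.x \
    --query Version --output text)
  aws sns publish --region "$region" \
    --topic-arn "arn:aws:sns:$region:123456789012:layer-updates" \
    --message "common-node-libs version $VERSION published in $region"
done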

Continue reading “Publish Lambda Layers across Regions”