My Transformation Story

This was initially planned as a pocket book, but that plan took a detour because of the massive printing and distribution expenses involved. Putting it up as a blog post also gives me the ability to update it as and when needed. I will try to stick to chronological order as far as possible, but if I deviate from the actual sequence anywhere, please point it out by posting on platform X and tagging jijutm, or feel free to leave a comment on this post.

As a preface, this is the story of my transformation from a modest DTP operator in 1987 to an AWS Solution Architect in 2020. As anyone can imagine, I have gone through plenty of hazards and over many speed breakers during that period.

This is not a story of overnight success or a linear path to achievement. It’s a story of continuous learning, adaptation, and a relentless pursuit of solutions. From my early days tinkering with technology to leading complex cloud migrations and developing serverless architectures for major organizations, my journey has been filled with unexpected turns, challenges, and opportunities. This book is a reflection on those experiences—the triumphs, the setbacks, and the lessons learned along the way. It’s a testament to the power of resourcefulness, the importance of community, and the ever-evolving landscape of technology. Whether you’re a seasoned technologist, just starting your career, or simply curious about the world of software and cloud computing, I hope this story inspires you to embrace change, find creative solutions, and never stop learning.

Early Days and First Encounters with Technology

In 1987, just out of Sreenarayana College, Chempazhanthy, I started loitering around a multi-business centre run by a few friends near Medical College Junction, named Pixel Graphics, which catered to thesis reports for medical students and the like, offering word processing, large-font titles for separation pages, plastic spring binding, photocopying and a long-distance telephone booth. This is where I got my first exposure to production systems, with software like GEM First Publisher, Lotus 1-2-3 and WordStar, printing on an 8-pin dot matrix printer. In no time I learned the intricacies of word processing and page layout tweaking using dot commands in WordStar. Later, for better quality output, I borrowed an electronic typewriter from another establishment run by a couple of friends, twin brothers, interfaced it to our computer and started printing from WordStar to that device. It was during this time that I got interested in computer hardware and enrolled in a certificate course at Universal Institute of Technologies near Press Club, Trivandrum. There, Shaji Sir played a pivotal role in shaping my dreams, and the hardware maintenance and assembling course was completed in the stipulated time. Following this, the institution offered me the post of hardware engineer on contract.

This continued until I decided to split off and start a separate unit in another part of the city, where I had my own desktop PC, scanner and laser printer, a very modest one for the time, the HP LaserJet 4L with a maximum of 300 dpi, and updated myself to PageMaker and CorelDRAW. The renowned engineering textbook author Dr. C. E. Justo, after getting a few samples done by me, selected me to do the drawings for the updated and revised edition of his Highway Engineering. The samples were a couple of machine parts and a few graphs, with no data to work from, only photocopies from the older edition with some corrections. The machine parts were drawn manually using vector components and functions in CorelDRAW, and the graphs were created in Excel from values I cooked up by checking against the supplied drawings, then exported as images to Adobe Photoshop where the resolution was increased manually.

Dubai and the Implementation Project

My first significant professional experience came when I took on an implementation contract with Al-fajr Print Media in Dubai. My task was to automate their business directory production process. This involved working with existing software and hardware to create a more efficient workflow. I successfully implemented a crucial automation system, solved numerous technical problems, and even earned a reputation as a reliable and knowledgeable technologist in the local community.

To explain it a bit: the same data was being entered into a billing system in Microsoft Access on Windows, then again into Excel on the Mac for sorting, and finally copied into a QuarkXPress layout for printing. The automation I implemented used Microsoft Access VBA scripts to export the data as QuarkXPress layout scripts, which could be run directly from the QuarkXPress Script Basket on the Mac, where the layout would then happen automatically.

By 1995, my implementation contract with Alfajr Print Media in Dubai had come to an end, and I returned to my hometown. My time there had been invaluable, giving me practical experience in implementing real-world solutions. However, I realized that formalizing my skills with a recognized certification would significantly enhance my career prospects, so I decided to pursue the Microsoft Certified Systems Engineer (MCSE) certification. The program involved rigorous study and a series of challenging exams covering topics like Windows NT Server, networking protocols, and system administration. My experience in Dubai, particularly my work with Windows systems and networking at Alfajr Print Media, proved to be a valuable foundation for my MCSE studies. The hard work and late nights paid off in 1997. I vividly remember the moment I received confirmation that I had passed all the required exams and officially earned my MCSE certification; it was a tremendous feeling of accomplishment. During this time, I was an avid reader of PCQuest, one of the most popular computer magazines in India, and I particularly enjoyed the articles by Atul Chitnis.

Transition to Linux

In December of 1997, inspired by his insightful articles, I decided to take a leap of faith and travel to Bangalore to meet him. Resources were limited, so I ended up hitching rides for a significant portion of the journey. Over three days, I managed to get free lifts in eight different trucks. Finally, I arrived in Bangalore and managed to connect with Mr. Chitnis. Meeting him was a truly inspiring experience. As I was preparing to leave, he handed me a couple of floppy disks. ‘Try this out,’ he said, with a slightly mischievous glint in his eye. He then added a word of caution: ‘This is an operating system. If you’re not careful, you could easily screw up your existing operating system installation, so proceed with caution.’ The return journey to Trivandrum was a stark contrast to the arduous hitchhiking trip to Bangalore. Thanks to Mr. Chitnis and his local connections, I was able to secure a direct ride in a truck heading towards my hometown.

Back home in Trivandrum, I was eager to explore the contents of the floppies. Remembering his warning about the potential to damage my existing Windows installation, I decided to take a precautionary step. I swapped the hard disk in my system—the same one I had brought back from Dubai—for a new, blank drive. With the new hard disk in place, I inserted the first floppy and booted up my computer. What followed was my first encounter with Linux. The floppies contained Slackware Linux 3.3, a distribution that had been released in October of that year. My initial forays into Linux with Slackware quickly evolved into a deeper engagement with the open-source community.

I became actively involved with ILUG (India Linux Users Group), a vibrant community of Linux enthusiasts across India. I even had the opportunity to give a few talks at in-person events in Trivandrum, sharing my knowledge of Linux system administration and networking. After Slackware, I transitioned to Red Hat Linux, and then, in early 2004, I started using Fedora.

It was in 1998 that I came across Logtech Systems, who ran an internet surfing centre at Vazhuthacaud with a high-speed internet connection shared through Windows and a one-day trial of Spoon Proxy. They had to reinstall the system every day just to extend the proxy software's trial. I suggested, and took the initiative, to shift the whole setup to Linux with Squid and a SOCKS proxy, which was executed in a few hours, and the whole team was satisfied with the transition.
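
The Squid side of such a setup needs only a handful of directives. What follows is a minimal sketch rather than the exact 1998 configuration, and the LAN subnet is illustrative; the clients then simply point their browsers at the Linux box on port 3128.

# write a minimal squid.conf and reload the proxy (subnet is illustrative)
cat > /etc/squid/squid.conf <<'EOF'
http_port 3128
cache_mem 32 MB
acl lan src 192.168.1.0/24
http_access allow lan
http_access deny all
EOF
squid -k reconfigure   # or start squid if it is not already running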

Building My Own Business

In the early 2000s, two of my close friends and I decided to take the plunge and start our own software company. Our first major client came to us with a request to develop custom software for a binary multi-level marketing system. My friend, who was our Java expert, raised a valid concern: MySQL 3.20, the version we were initially planning to use, lacked transaction support. After some digging online, I discovered that a newer, unreleased version of MySQL, version 3.23, had the potential for transaction support. The catch? It was only available as source code. I had some experience with compiling software from source, so I took on the challenge. After a few late nights and some careful configuration, I successfully compiled a MySQL 3.23 release candidate. We then rigorously tested the transaction functionality directly from the command-line interface, ensuring that it worked as expected. After careful consideration and weighing the risks, my friend and I decided to go ahead and use this release candidate on the production servers for our client's project.
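
From memory, and hedged accordingly since the tarball name and configure flags below are indicative rather than a transcript of what was typed back then, building that release candidate followed the usual autoconf routine, with the BDB table handler providing the transaction support we were after:

tar xzf mysql-3.23.xx.tar.gz         # release-candidate source tarball of the day
cd mysql-3.23.xx
./configure --prefix=/usr/local/mysql --with-berkeley-db   # BDB tables brought transactions
make && make install
./scripts/mysql_install_db           # initialise the grant tables
/usr/local/mysql/bin/safe_mysqld &   # start the server for command-line testing

The BEGIN, COMMIT and ROLLBACK behaviour was then exercised from the mysql command-line client before we trusted it with the client's project.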

By 2005, our company had become recognized as experts in MLM software development. This recognition was largely due to a unique tool I had developed: a plan evaluation simulator. This simulator could take an MLM plan as a configuration array—essentially a structured set of data that defined the plan’s rules and structure. From this configuration, the simulator could calculate the breakout period and populate a database table with numerical node names to represent the full network structure. This simulator was a game-changer for our clients.

As our company continued to grow, we realized the importance of clearly defining our roles and responsibilities. One of my partners, who had a remarkable ability to connect with clients and a strong understanding of financial matters, took on the dual role of CEO and Finance Manager. Our Java programmer friend naturally transitioned into the role of Project Manager. With my extensive software experience, multiple certifications including MCSE and RHCE, and deep understanding of hardware, it was a natural fit for me to take on the role of CTO. Our success with MLM projects allowed us to expand significantly. We outgrew our initial setup and moved into a proper office space near the Thiruvananthapuram Medical College.

Integration of Church Directory

In 2002, I was approached by organizers from a nearby church, the Immanuel Marthoma Church, Paruthippara, who needed help creating an interactive CD-based directory of their members. They wanted to include details about each family and individual, along with photographs. I suggested using Microsoft Excel for the textual data and a structured folder system for the photos, using the edavaka register number and serial numbers to link the data. The interactive CD was created using Macromedia Flash, with each family having an individual SWF file and a single loader interface. With around 3,500 members across 800 families in the church, this could have been a herculean task. But my early DevOps instinct, together with a VBA SendKeys macro, meant Macromedia Flash was driven from Microsoft Excel and the layout was done by my system on its own, with me sitting back and watching the activity on screen.

Five years later, I received another call from the same parish. They were now looking to create a printed version of their member directory. They had diligently maintained the data in the Excel spreadsheet using the structure we had established for the CD project. By this time, I had become quite proficient in PHP programming and had started using the FPDF library extensively for PDF generation. I was also experimenting with GNU Make for basic task orchestration. This combination of tools provided the perfect solution. I created a series of PHP and shell scripts, each responsible for a specific part of the process, and then used GNU Make to orchestrate their execution. The commands were: make import, make layout, make pdf, make index, and make binding. This orchestrated workflow, controlled by GNU Make, allowed me to automate the entire print publication process. The approach I developed for generating the print directory has proven so effective that it is still being used today. The church revises the directory every five years, and I have continued to be involved in this process. Recently, recognizing the importance of preserving this knowledge and making it easier for others to learn the process, I created a video demo using OBS Studio and the OpenShot video editor; the final video is hosted on YouTube: http://bz2.in/82jbxu
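
The original Makefile stays with the parish project files, so the sketch below is only its shape as I remember it; the script names are hypothetical, but the five targets are the ones listed above. Recipe lines must be indented with a tab.

# Makefile sketch for the print-directory pipeline (script names are hypothetical)
import:
	php import_excel.php members.xls   # pull the maintained spreadsheet into a working database
layout:
	php layout_families.php            # arrange each family block for the page
pdf:
	php generate_pdf.php               # render the pages with FPDF
index:
	php build_index.php                # compile the name index
binding:
	sh bind_signatures.sh              # impose the pages in binding order

Running the five targets in sequence reproduces the whole publication, while any single target can be rerun on its own after a data correction.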

Scaling and Optimizing for Growth

We also started expanding our team, hiring new staff members through direct recruitment and referrals. Within the team, there were always friendly debates, particularly between me and my Java programmer friend, about the merits of PHP versus Java. One day, during one of these debates, I decided to settle the matter with a quick demonstration. I created a simple PHP page with just this code.

<?php phpinfo(); ?>

I opened the page in a browser, and in a matter of seconds a detailed report appeared, similar to the standard phpinfo() output most PHP developers will recognize.

I then challenged my Java programmer friend to produce a similar output in the same timeframe using Java. He then, with a good-natured sigh, admitted defeat. ‘Okay, okay,’ he conceded, ‘PHP is better… for this at least.’

Towards the end of 2005, we were facing a frustrating and recurring problem: employee attrition. We were investing significant time and resources in recruiting and training new team members, only to see them leave after just three to six months, often citing offers from companies located within Trivandrum Technopark. We discussed this internally and decided that we needed to secure a Technopark address, “by hook or crook.” By 2006, our CEO managed to forge a mutually beneficial association with another company already located within the park, sharing their office space. By 2008, our company had grown considerably, and the space-sharing arrangement within Technopark was no longer sustainable. Our CEO focused his efforts on securing external investment. His hard work paid off, and he managed to convince a major investor to back our company. This influx of capital allowed us to make a significant upgrade: we moved into a spacious 40-seater office within Technopark.

As resources increased, we quickly ran into a network bottleneck. Our existing network was a simple wired setup using a hub with only six ports. Recognizing this limitation, I suggested that we make a more significant upgrade: transitioning to a wireless network. This would allow us to easily add new systems simply by installing wireless network cards. We opted for relatively inexpensive PCI Wi-Fi cards from D-Link, which used Atheros chipsets. However, these cards didn’t have native Linux drivers at the time, meaning they wouldn’t work out of the box with our Fedora systems.

Fortunately, I was familiar with a tool called ndiswrapper. I took on the task of wrapping the Windows drivers supplied with the D-Link cards using ndiswrapper, and after some careful work I had the Wi-Fi cards working perfectly. This was a significant accomplishment. News of my success with the D-Link Wi-Fi cards and ndiswrapper quickly reached the distributor. They were facing a major problem at Calicut University, where they had supplied 200 of these PCI cards; the university was running Fedora Linux exclusively, and as a result none of the cards were working. The distributor, along with the marketing manager for D-Link, contacted me and arranged my transportation to Calicut University. With the assistance of the university's lab assistants, we set to work, and within about three hours all 200 Wi-Fi cards were up and running. The university staff was extremely grateful, and the distributor was relieved that the payment issue was resolved.

Back in the office, I insisted that our developers work in a Linux environment. I believed that Linux provided a superior development experience, with its powerful command-line tools, robust scripting capabilities, and overall stability. Alongside this, Subversion and Jenkins were implemented to automatically deploy any committed code to our local web server, and I also introduced a practice of manual lint checking. To facilitate all of this, we migrated all developer machines to Fedora. The impact on our code quality was immediately noticeable.
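
For the record, the ndiswrapper routine mentioned above is only a handful of commands; the .inf filename here is from memory and should be treated as illustrative.

ndiswrapper -i net5211.inf   # install the Windows NDIS driver shipped with the D-Link card
ndiswrapper -l               # confirm the driver and the Atheros hardware are recognised
modprobe ndiswrapper         # load the kernel module
ndiswrapper -m               # write the modprobe alias so the interface comes up on boot
iwconfig                     # the card should now show up as a wireless interface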

We had bagged a project to create a web application for a travel agency portal, integrating airline ticketing using the Galileo GDS API as well as low-cost airline APIs. Our technical architects were keen on a WordPress front end, as I already had ample exposure to WordPress and about five resources were fully trained in WordPress theme customization and plugin development. From the start I was worried about doing this sequentially, since multiple API calls were involved and the front end would have to wait until all of them finished. My instinct and basic nature made me give it deep thought, and I finally arrived at a solution: use Memcached as a central store. Search information submitted from the front end would be handled by a plugin method and normalized into a JSON structure, stored in Memcached under a request id that would be used until the final stage. The front end would then start polling another plugin method, which looked in Memcached for the request id with a results suffix; once that key was populated, it told the front end how many pages of results were stored in Memcached. The actual back-end search was handled by a shell script that launched multiple PHP scripts in the background using the & token and watched their output via jobs. This effectively used the operating system's ability to run the PHP jobs as separate, isolated processes, improving search efficiency by about 70%. The first time it worked, the whole team welcomed it with a roaring shout and applause.
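
A stripped-down sketch of that dispatcher is shown below. The worker script names and the page count are hypothetical stand-ins, but backgrounding the PHP workers with & and reaping them with jobs and wait is exactly the trick described above.

#!/bin/bash
# search_dispatch.sh <request-id>  (hypothetical name for the back-end search driver)
REQUEST_ID="$1"
PAGES=4                                   # one worker per result page / supplier

for page in $(seq 1 "$PAGES"); do
    php search_worker.php "$REQUEST_ID" "$page" &   # each worker writes its page into memcached
done

jobs -l                                   # log the background job PIDs
wait                                      # block until every worker has finished

# let the polling plugin know results are ready, e.g. key "<request-id>_results" = page count
php publish_result_count.php "$REQUEST_ID" "$PAGES"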

Further down the years, when we reached the final stage of integrating the air ticketing solution with the GDS as well as the low-cost airlines, the ticketing activity showed intermittent failures. We identified the pain point as the multiple handshakes between our server and the airline APIs, which had to traverse international links. To solve this, I leased a VPS from a US-based hosting provider and developed a broker application to run there. Our colocated server in Cochin would send a payload to the broker in the US over HTTP POST, the multiple handshakes would happen between the airline API and the broker, and the final information would be sent back, again via HTTP POST, to a specific URL on our colocated server, to be pushed to the corresponding front end. This significantly improved the ticketing process, and failures dropped drastically.
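
The wire protocol between the two ends was nothing more exotic than HTTP POST in both directions, roughly as below; the URLs and fields are placeholders, not the actual endpoints.

# Cochin -> US broker: hand over the booking payload along with a callback URL
curl -s -X POST "https://broker.example.com/ticket" \
     -H "Content-Type: application/json" \
     -d '{"request_id":"REQ123","callback":"https://portal.example.in/ticket-status","payload":{}}'

# US broker -> Cochin, after the airline API handshakes finish: push the result to the callback
curl -s -X POST "https://portal.example.in/ticket-status" \
     -H "Content-Type: application/json" \
     -d '{"request_id":"REQ123","status":"TICKETED","pnr":"ABC123"}'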

It was time to expand the airline solution, as we had bagged an order from an established business group to implement it in over 4,500 business outlets across the country. There were heavy branding and white-labelling requirements with theme changes. Thanks to WordPress these were a breeze, and with its superior SEO capabilities, a few additional custom plugins and an optimized database structure, the implementation took only 30% of the estimated time.

By this time I had authored Open PHP MyProfiler, a tool to profile MySQL queries in a PHP environment. Its basic advantage was that it needed no extra installations and no new language to learn, as it was written entirely in PHP. Hence it could be installed even on shared hosting like that provided by GoDaddy or Hostinger. Yes, it has some limitations, but across its different versions, as of the time of writing, Open PHP MyProfiler has had 10,000 downloads. Check it out on the blog, where I may add enhancements and new releases.

Embracing the Cloud and Serverless

As our business grew, the cost of maintaining our infrastructure became a growing concern. We were paying for both a colocated server in Cochin and a VPS in the US, which added up to a significant expense. After significant assessments and careful analysis of various options, I suggested that we consider migrating our infrastructure to Amazon Web Services (AWS). After some discussion and further evaluation, the management team agreed to proceed with the migration.

Once we decided to migrate to AWS, I took the lead in implementing the transition. I managed the migration using EC2 for our virtual servers, ElastiCache for caching, and S3 for storage. This was a significant improvement over our previous setup. From there, I began to explore more advanced services and architectural patterns.

Around the second half of 2010, the state Police Department approached us to develop a crowd management solution. After analyzing their existing processes, I proposed developing a completely new PHP framework that would provide better security and a smaller footprint, leading to improved performance. This led to the creation of phpmf, a lightweight routing framework that I later shared on GitHub. At less than 5 KB, phpmf was incredibly efficient. Hosted on AWS EC2 with Auto Scaling and Elastic Load Balancing (ELB), the solution handled peak traffic of 3,000 to 5,000 visitors per minute with ease. Later, in 2015, when AWS announced general availability for Node.js Lambda functions, we decided to migrate the image validation process to a serverless architecture. This involved direct uploads to S3, with S3 events triggering a Node.js Lambda function that would validate the image type and resolution. We also implemented a check to catch users who were attempting to upload invalid image files: we had discovered that some users were renaming BMP files as JPGs in an attempt to bypass the validation. To detect this, we implemented a check of the file's 'magic header', the first few bytes of a file that identify its true type.
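
The production check lived inside the Node.js Lambda, but the idea fits in a few lines of shell: a JPEG starts with the bytes FF D8, while a BMP starts with the ASCII letters 'BM'.

# shell rendering of the magic-header idea; the real check ran inside the Lambda function
sig=$(head -c 2 "$1" | xxd -p)    # first two bytes of the uploaded file, hex encoded
case "$sig" in
    ffd8) echo "genuine JPEG" ;;
    424d) echo "BMP renamed as .jpg - reject" ;;
    *)    echo "unknown format - reject" ;;
esac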

As a continuation, we shifted PDF coupon generation from an inline PHP FPDF system to a Java-based AWS Lambda function, which increased the achieved concurrency from 60 to 400, assessed as a 566% improvement. Another analysis showed the cost was also drastically reduced; I don't have a reference for that now, but I remember someone in a discussion stating that the overall reduction was about 30% compared with the same period of the previous year.

High-Profile Projects and Continued Innovation

The lessons I learned from these diverse experiences gave me the confidence and technical acumen to design and develop a comprehensive application for Kotak Mahindra Bank. This application incorporated a complex conditional survey that adapted to user responses, providing a personalized experience. I chose to build this application using a completely serverless architecture, leveraging the power and scalability of AWS Lambda, API Gateway, and other serverless services. Static files for the front-end were hosted directly on S3. For the data store, I selected AWS DynamoDB.

Building on the success and experience gained from developing the application for Kotak Mahindra Bank, I next took on an even more ambitious project: the development of a complete serverless news portal and news desk management system for Janmabhoomi Daily, a major news agency. This project presented a unique set of challenges, especially in handling real-time updates and ensuring data consistency across the distributed serverless architecture. We used AWS AppSync for real-time data synchronization between the news desk management system and the public-facing portal. I designed the entire solution using a fully serverless architecture on AWS, leveraging services like Lambda, API Gateway, S3, and DynamoDB. The news portal was designed for high availability and scalability, capable of handling large volumes of traffic during breaking news events. The news desk management system streamlined the editorial workflow, allowing journalists and editors to easily create, edit, and publish news articles. This project further solidified my expertise in serverless technologies.

Since we were heavily into WordPress, it was quite natural to create a WordPress plugin implementing Open PHP MyProfiler together with a sampler that reports the time taken by each query against the request URL. The number of queries a page runs to generate its output is also visible in the sampler output. An expert MySQL architect can then rerun the reported queries with an EXPLAIN prefix to understand why a query takes too long; that analysis can pinpoint inefficient database indexing, and optimizing those indexes makes the system run faster.
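
The workflow from a sampler report is straightforward: copy the slow query out of the report and rerun it with EXPLAIN. The table and column names below are hypothetical; what matters is the type, key and rows columns of the output.

# take a slow query reported by the sampler and ask MySQL how it executes it
mysql -u dbuser -p blogdb -e "
EXPLAIN SELECT p.ID, p.post_title
FROM wp_posts p
JOIN wp_postmeta m ON m.post_id = p.ID
WHERE m.meta_key = 'views'
ORDER BY m.meta_value + 0 DESC
LIMIT 10;"
# type=ALL with a large rows estimate usually points to a missing index on the joined column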

Deep into AWS and Community Building

By this time I had added a few feathers to my cap: the AWS Certified Solutions Architect Associate and the AWS Certified DevOps Administrator Associate. I was also nose-deep in active community building as the co-organiser of AWS User Group Trivandrum. Along with this, the heavy success of the crowd management solution narrated earlier kept me in the limelight, and I was invited by AWS to many events to deliver talks about my experience of shifting from traditional hosting to a serverless mindset.

With active participation in all these community activities, I was invited to join a beta program by AWS, which is now known as AWS Community Builders. There too I delivered a lot of sessions to different user groups and a few technology summits. I frequently travelled to Chennai, Bangalore, Mumbai and Cochin for these summits, and everywhere I was welcomed warmly.

Next Phase in the Career

In 2018, due to some other difficulties and complications, there was a business transfer and our company merged with a couple of others to form a new digital solutions company, with a major share of the resources having deep SAP exposure. With my certifications and deep knowledge of cloud technologies and networking, I was naturally accepted as CTO of the new conglomerate.

Once we settled down, the router in use, a Cisco routing system, was not enough to handle the whole security setup and the in-house servers along with bandwidth pooling across two internet service providers. The systems engineer and the combined CEO were planning to procure new hardware when I intervened and suggested that we utilize a salvaged multi-homed rack server, which had been set aside because it could not load a Windows server operating system, and implement pfSense on it, a lightweight packet-filtering firewall and utility suite on top of FreeBSD. It also has a versatile and intuitive web interface, so configuration and monitoring can be done from any standard browser, desktop or mobile. The implementation was smooth and the company saved around 2,00,000 INR in hardware costs. While working there and handling some critical applications, I had to take a few days' leave to go to Bangalore for an AWS User Group event, so I decided to configure a PPTP VPN for my own access. It was set up to authenticate against the internal LDAP running on an Ubuntu server, which all resources were already using for login. Although that LDAP required manual CLI intervention to add or remove logins, I had already created a Jenkins project that performed those functions in the background, with an easy-to-use web front end from the Jenkins interface.
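
Under the hood, that Jenkins job only wrapped the standard OpenLDAP client tools; the base DN and attributes below are placeholders, not the actual directory layout.

# add a login (the LDIF is generated by the Jenkins job from its form parameters)
cat > /tmp/newuser.ldif <<'EOF'
dn: uid=jdoe,ou=people,dc=example,dc=org
objectClass: inetOrgPerson
cn: John Doe
sn: Doe
uid: jdoe
userPassword: {SSHA}generated-hash-here
EOF
ldapadd -x -D "cn=admin,dc=example,dc=org" -w "$LDAP_ADMIN_PW" -f /tmp/newuser.ldif

# remove a login
ldapdelete -x -D "cn=admin,dc=example,dc=org" -w "$LDAP_ADMIN_PW" "uid=jdoe,ou=people,dc=example,dc=org"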

I was interviewed by CEO Insights Magazine; you can read the article about me in their archives.

Things were going smoothly with many new projects and clients, as well as with the AWS community, until the fag end of 2019, when I was preparing to attend AWS Summit 2020 in Mumbai. The discounted delegate tickets had been purchased and the flight tickets procured when the pandemic broke all predictions in 2020. Due to the pandemic, I, being a high-value resource, was asked to resign, and I did so with June as my completion date; everything required for the knowledge base transfer and credentials handover was documented, and I started the handover. The pfSense and PPTP implementation done much earlier now proved its worth for the company, as 80% of the resources were able to access the internal hosting and the Jenkins interface through the VPN without further hardware or configuration expenses. With my LinkedIn network of people I had interacted with through community building with ILUG and AWS, and the many who had trained under me, a few recommendations came my way, and in July I was placed as a Solution Architect with UST Global, Trivandrum, India, with a remote-work facility. CEO Insights Magazine did another interview with me, and the article was published on their portal.

Since remote work was allowed, my working hours were quite flexible and I had plenty of free time, which I decided to use beneficially. I volunteered to support the District Disaster Management Department by taking charge of coordinating several volunteer students from different colleges. The department was headed by the then Trivandrum District Collector, Dr. Gopalakrishnan, a very efficient and dynamic personality, who had great appreciation for the fact that I had single-handedly developed a resource data collection system to gather information about skilled labourers who had migrated from different parts of the country and were housed in construction camps spread around Trivandrum District. This responsive application was hosted on AWS with the help of the AWS team, whom I convinced to provide pandemic support credits for the department's AWS account. I take this opportunity to thank the staff and management of VelosIT Technologies for permitting us to use their facilities in Trivandrum Technopark for the development of the application. The data collection was delegated to the volunteer students, who did it promptly, and finally, after some data analytics and reporting, a special train with multiple boarding points and enough segregation was arranged to take the labourers to their respective locations en route.

The arrangement with UST Global was a contract that had to be renewed every six months. Sometimes the renewal would get delayed, and my salary with it, and I finally got fed up and started looking for a permanent position, compromising on some facilities. That materialized in April 2022, when I joined Quest Global, Technopark Trivandrum, as a Technical Architect. The CEO Insights reporter, being a LinkedIn follower, came to know about the job switch, and yet another interview was published.

Quest Global and a Story of Corporate Politics

Once the pre-joining formalities were over, the onboarding date was communicated as April 4, 2022. The onboarding was to be done online, and I was all set to log in at 9 am. But there was a mishap at our apartment at 6 am, which I managed to capture as a short video using my Samsung Galaxy M14 and an Amazon Basics tripod. I was in the parking space at 5:40 am, as is my practice, so as not to disturb the rest of the household at that hour of the morning. While there, I could hear the sound of a trunk cracking and was sure the tree that had been leaning on our apartment overnight might fall, so I tried to capture it on video. The video was captured, backed up to my Google Photos account and uploaded to YouTube while this was being written; I have made it unlisted, and the link is attached herewith.

Incident on April 4 2022

The onboarding took only 40 minutes, which I attended and recorded using OBS on Linux for future reference. I was supposed to collect the laptop issued to me the next day, so there was nothing else to do, and I simply joined the firefighters who came to clear the debris, providing them with black coffee and other support while they removed it. That was a tough task; even with the required tools and equipment, it took them four hours of hard toil to remove the major debris. After that, KSEB staff came to rectify the power line issues that had occurred during the trunk removal.

On Tuesday, April 5, I collected the laptop from Quest Global and was back home. Things went smoothly until June 14, when an OS update arrived and, to my dismay, the laptop refused to connect over Wi-Fi, and some policy denied USB tethering through my Samsung M14 5G with a Jio 5G connection. I reported the issue from my personal laptop running Ubuntu and attended a Teams meeting from it that day. I was supposed to drop the laptop at the Quest Global office in Technopark on June 15, and while driving there, a speeding kid slammed into my car near the Technopark Union Gate. The same bike already had two overspeeding tickets pending at the time, but I came to know about that only a week later.

Still, I managed to deliver the laptop to the designated person at Quest Global and then made it to the Honda workshop.

It was three months later that I got a call on my mobile from the then centre head, who was speaking with me for the first time. He asked me to introduce myself and summarize my background, and his comment at the end, just before disconnecting, is still vivid in my memory: “Oh, so you have 4 years more experience than me in a leadership role?”. With immediate effect he assigned me the task of creating a presentation on an action plan for the cloud migration of a complicated application, for which I reused a generic plan I had prepared a few years earlier for a freelance consulting project and published elsewhere. He accepted the plan presentation and set it aside, then asked me to proceed with it, and for a long time after that he was either not to be found at all or very busy with meetings.

Meanwhile, in February, I was assigned a code review and SQL optimization project that had been developed by another team of the client who had engaged Quest Global. The whole project was completed efficiently, with high appreciation from the client. But the internal project manager was angry with me because I had not created an Excel report about what had to be done and the time estimates for the actual modifications.

Along with the above project, I was following up with the internal team on the cloud migration. From what little I could gather, the complicated services were already dockerised, and I was ready with a set of AWS CLI commands to push the Docker images to the registry (Amazon ECR) and run the suite on ECS with AWS Fargate, but the resources were reluctant to share more information as they were under NDA. And the actual technical architect for that particular project was openly hostile towards me. I started to feel that something weird was cooking behind the scenes.
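
For what it is worth, the command set I had lined up was the standard push-and-run sequence; the account id, region, names and network details below are placeholders.

REGION=ap-south-1
ACCOUNT=123456789012
REPO=legacy-suite

# push the already-built image to a private registry (Amazon ECR)
aws ecr create-repository --repository-name "$REPO" --region "$REGION"
aws ecr get-login-password --region "$REGION" | \
    docker login --username AWS --password-stdin "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com"
docker tag "$REPO:latest" "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"
docker push "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"

# run it on Fargate: a cluster, a task definition, then a service
aws ecs create-cluster --cluster-name legacy-cluster --region "$REGION"
aws ecs register-task-definition --cli-input-json file://taskdef.json --region "$REGION"
aws ecs create-service --cluster legacy-cluster --service-name legacy-svc \
    --task-definition legacy-suite --desired-count 1 --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-abc123],securityGroups=[sg-abc123],assignPublicIp=ENABLED}" \
    --region "$REGION"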

Over and above this, I was assigned to support a team that had developed an IoT project on Azure and had to migrate it to AWS. By the time I came in, they had tried to redevelop the system on AWS and were failing. Whenever I am asked to support a team, I never take the responsibility away from the team, and I give them their due importance. This time too, I analyzed their implementation on AWS, suggested where they were going wrong, and pointed them to workable solutions on the internet with example code and documentation.

One fine morning I had informed the then client about an absence on a Friday, as I was expected to attend an AWS User Group event in Bangalore and would be travelling by train that day. Then, due to some personal issues, I sent a mail to the centre head and HR saying that I wanted an additional day of leave on the Thursday and would be back only the following Monday. This sparked a lot of hubbub and I was called in for a meeting, where the project managers and the centre head all pounced on me and accused me of non-performance and negligence. My 'criminal actions' were listed as follows: the migration action plan had been copied from the internet; I could not even create a decent report in Excel; and in the IoT project I had been supposed to take over, push the developers out of scope and do the migration myself. I tried to explain that the action plan presentation had originally been made by me, with no heed paid. For the code review and SQL optimization, a performance boost that took the search page load from 24 seconds down to 7 seconds was not enough; I should have created the report with time estimates first. For the last one, I tried to explain that on my farm I use bamboo poles to support banana plants, but I never expect the bamboo poles to sprout and produce banana bunches. In any case, the threesome was intent on sending me out of the establishment. It was evident from their words that if I chose to resign, the full and final settlement would be calculated and paid immediately and no knowledge transfer would be required; on the other hand, if I was reluctant, I would be put on a PIP, where they would monitor my performance for a week and then decide what would happen to me. I decided to quit and resigned from the firm then and there.

I managed to convince Innoval Digital Solutions, the company I had left in 2020 during the pandemic, to take me on as a consulting solution architect. This contract was for a period of 10 months.

Currently I am actively involved in multiple contracts as a consulting solution architect and network hardware architect, implementing free and open-source solutions and custom AWS serverless implementations for clients found through freelance portals and personal network connections.

The transition from a DTP Operator in 1987 to a seasoned Solution Architect concludes here.

Further personal interests

But I have other interests too: motorbike riding and agricultural research. On the bike I always try to be as careful as possible; for almost every ride longer than 20 minutes I wear knee and elbow guards, gloves with knuckle protection, and shoes with front and back steel padding, along with a full-face helmet. My luggage is wrapped in polyurethane sheet and fastened to the pillion seat, and I always ride solo on such trips. Naturally, I got involved in a couple of biker clubs and am quite active in the Bajaj Avenger Club.

As for agricultural research, there is not much to it. It started with a parental property; instead of leaving it as it was for nature to breed some waste plants, I started the drive, initially during the pandemic, by planting a few leafy vegetables. There was an attack by some kind of pest, and after getting advice from YouTube and others, the first success was with an organic pesticide: a combination of neem leaves, wild turmeric (Curcuma aromatica) and bird's eye chili (Capsicum frutescens), ground to a paste, mixed with water and strained through a piece of cotton dhoti before being filled into the portable hand-pumped sprayer. The quantities finally used were 500 g of each item to 5 litres of well water. The harvest was mostly used within our family, and no sale was attempted; I thought that would be a waste of time and effort. After a detailed discussion with our caretaker, I decided to attempt plantain cultivation on a medium-large scale that would not exhaust the resources. This is where I managed to create an organic fertilizer: multiple large-scale fish-cleaning points were identified, and the fish waste collected from them was treated with industrial jaggery and fermented yeast for a week; the slurry was used as fertilizer for a marked area of 10 plantains of assorted varieties. The fruit-bearing results were really good, with one bunch weighing 30 kg and another 48 kg, and I have started selling to local crop procurement agencies.

We also had attacks from a troop of monkeys and from bats. With support from YouTube and other online research, I finally decided to try a suggestion from a farmer who claimed to have implemented it successfully: deploying rubber snakes around the farm. I tried this and found it very effective in mitigating the monkey attacks, as they steer away from places where snakes are around. The next issue was the bats. My technology background sparked some ideas, and online research got me in the right direction: with a Raspberry Pi, a high-frequency sound sensor from the bat detector project and good-quality speakers, I managed to drive the bats out of our farmland. The squeaks of a hawk were enough to frighten them; a pre-recorded MP3 played through the speaker with a bit of amplification did the job.
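
The playback side was a small loop on the Pi. The pin number, the detector wiring and the file name here are assumptions, and the actual trigger logic came from the bat detector project itself.

#!/bin/bash
# play the hawk call whenever the detector pulls GPIO 17 high (pin and paths are assumptions)
while true; do
    if raspi-gpio get 17 | grep -q 'level=1'; then
        mpg123 -q /home/pi/sounds/hawk_squeak.mp3   # amplified through the speakers
        sleep 30                                    # avoid replaying back to back
    fi
    sleep 1
done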

Yet another activity was sprouting a mango tree from a cutting. When KSEB staff cropped the trees touching a power line at a nearby hotel, I took one cutting; the cut end was treated with onion juice for 2 hours, then aloe vera gel overnight, and it was finally planted in a pot with a 1:1 mixture of cocopeat and soil, mixed with a ripe papaya left over from a bat attack. Then, every day at a fixed time, a very small amount of water was applied, and once a week, some day-old rice brine that I brought from home. On the 12th day new leaves were seen at multiple points, and on the 29th day the potted plant had sprouted flowers. A moment of real happiness.

Harvesting the papaya fruit was the toughest task, as the tree was taller than our standalone ladder, and trying to climb it the way a coconut climber does is not easy, since the plant is not as strong as a coconut palm and the sap that oozes out can cause acute itching. So, using old PVC pipe, I created an extended mechanical arm. The tools used were a gas torch, a hand saw and a mini drill. The process was to cut the pipe into multiple pieces so that transportation would be easy. Each piece was given a heat treatment at one end to soften the PVC, and another pipe was pushed into the heated end so that, once it set, the pieces could be easily assembled after transport. The final end was sliced vertically three times, to about a foot in length, to create fingers; these were heated and bent outward, the fingers were cross-connected using a plastic thread, and the thread end was taken through the main pipes to the bottom-most one, where a sleeve was installed and the end tied to it. Now, when we pull the sleeve down the fingers come together, and when the sleeve is released the fingers open up.

See it in action: https://youtu.be/wrVh7uBfBTY

Unleashing Cloud Power on the Go: My Portable Development Studio with Termux and AWS

In today’s fast-paced tech world, flexibility and portability are paramount. As a developer, I’ve always sought a setup that allows me to code, manage cloud resources, and analyze data from anywhere. Recently, I’ve crafted a powerful and portable development environment using my Samsung Galaxy Tab S7 FE, Termux, and Amazon Web Services (AWS).

The Hardware: A Tablet Turned Powerhouse

My setup revolves around the Samsung Galaxy Tab S7 FE, paired with its full keyboard book case cover. This tablet, with its ample screen and comfortable keyboard, provides a surprisingly effective workspace. The real magic, however, lies in Termux.

Termux: The Linux Terminal in Your Pocket

Termux is an Android terminal emulator and Linux environment app that brings the power of the command line to your mobile device. I’ve configured it with essential tools like the following; a rough install sequence is sketched just after this list:

ffmpeg: For multimedia processing.
ImageMagick: For image manipulation.
Node.js 22.0: For JavaScript development.
AWS CLI v2: To interact with AWS services.
AWS SAM CLI: For serverless application development.
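
Getting the toolchain in place is mostly a matter of Termux's package manager plus pip. This is roughly the sequence, though package availability changes over time, and the AWS tooling in particular tends to go in via pip rather than pkg, so getting CLI v2 specifically may need extra steps.

pkg update
pkg install ffmpeg imagemagick nodejs python git openssh
pip install awscli aws-sam-cli    # AWS CLI and SAM CLI via pip inside Termux
aws --version && sam --version    # sanity check
node --version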

AWS Integration: Cloud Resources at Your Fingertips

To streamline my AWS interactions, I’ve created a credentials file within Termux. This file stores my AWS access keys, region, security group, SSH key path, and account ID, allowing me to quickly source these variables and execute AWS commands.

export AWS_DEFAULT_REGION=[actual region id]
export AWS_ACCESS_KEY_ID=[ACCESS KEY From Credentials]
export AWS_SECRET_ACCESS_KEY=[SECRET KEY from Credentials]
export AWS_SECURITY_GROUP=[a security group id which I have attached to my ec2 instance]
export AWS_SSH_ID=[path to my pem key file]
export AWS_ACCOUNT=[The account id from billing page]

source [path to the credentials.txt]

In the above configuration, the security group id is used to automatically patch the group with my current public IP, granting it blanket access, using the following shell commands.

  currentip=$(curl --silent [my own what-is-my-ip clone - checkout the code ])
  aws ec2 describe-security-groups --group-id $AWS_SECURITY_GROUP > ~/permissions.json
  grep CidrIp ~/permissions.json | grep -v '/0' | awk -F'"' '{print $4}' | while read cidr;
   do
     aws ec2 revoke-security-group-ingress --group-id $AWS_SECURITY_GROUP --ip-permissions "FromPort=-1,IpProtocol=-1,IpRanges=[{CidrIp=$cidr}]"
   done   
  aws ec2 authorize-security-group-ingress --group-id $AWS_SECURITY_GROUP --protocol "-1" --cidr "$currentip/32"

The what-is-my-ip code is on GitHub.

With this setup, I can seamlessly SSH into my EC2 instances:

ssh -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" -o IdentitiesOnly=yes -i $AWS_SSH_ID ubuntu@13.233.236.48 -v

This allows me to execute intensive tasks, such as heavy PHP code execution and log analysis using tools like Wireshark, remotely.

EC2 Instance with Auto-Stop Functionality

To optimize costs and ensure my EC2 instance isn’t running unnecessarily, I’ve implemented an auto-stop script. This script, available on GitHub ( https://github.com/jthoma/code-collection/tree/master/aws/ec2-inactivity-shutdown ), runs every minute via cron and checks for user logout or network disconnects. If inactivity exceeds 30 seconds, it automatically shuts down the instance.

Why This Setup Rocks

Portability: I can work from anywhere with an internet connection.
Efficiency: Termux provides a powerful command-line environment on a mobile device.
Cost-Effectiveness: The auto-stop script minimizes EC2 costs.
Flexibility: I can seamlessly switch between local and remote development.

Conclusion

My portable development setup demonstrates the incredible potential of combining mobile technology with cloud resources. With Termux and AWS, I’ve created a powerful and flexible environment that allows me to code and manage infrastructure from anywhere. This setup is perfect for developers who value portability and efficiency.

Globals vs. Constants: The Database Connection String Showdown in a PHP World

In the PHP world, we often encounter the age-old debate: globals vs. constants. This discussion pops up in various contexts, and one common battleground is how we store configuration values, especially sensitive ones like database connection strings. Should we use a global variable like $dsn or a defined constant like MySQL_DSN? Let’s dive into this, focusing on the specific example of a Data Source Name (DSN) for database connections.

The Contenders:

Global Variable ($dsn): A global variable, in this case, $dsn = "mysql://user:password@serverip/dbname", is declared in a scope accessible throughout your application.

Defined Constant (MySQL_DSN): A constant, defined using define('MySQL_DSN','mysql://user:password@serverip/dbname'), also provides application-wide access to the value.

The Pros and Cons: An Analysis

Mutability: Constants are immutable. Once defined, their value cannot be changed. This can be a significant advantage for security. Accidentally or maliciously modifying a database connection string mid-execution could have disastrous consequences. Globals, being mutable, are more vulnerable in this respect.

Scope: While both can be accessed globally, constants often encourage a more controlled approach. They are explicitly defined and their purpose is usually clearer. Globals, especially if used liberally, can lead to code that’s harder to reason about and maintain.

Security: The immutability of constants provides a slight security edge. It reduces the risk of the connection string being altered unintentionally or maliciously. However, neither approach inherently protects against all vulnerabilities (e.g., if your code is compromised). Proper input sanitization and secure coding practices are always essential.

Readability: Constants, by convention (using uppercase and descriptive names), tend to be more readable. MySQL_DSN clearly signals its purpose, whereas $dsn might require looking at its initialization to understand its role.

Performance: The performance difference between accessing a global variable and a defined constant is negligible in modern PHP. Don’t let performance be the deciding factor here.

Abstracting the MySQL Client Library:

Let's discuss abstracting the MySQL client library. This is a fantastic idea regardless of whether you choose globals or constants. Using an abstraction layer (often a class) allows you to easily switch between different database libraries (e.g., MySQLi, PDO) or even different connection methods without rewriting large portions of your application.

Here’s a basic example (using PDO, but the concept applies to other libraries):

class Database {
    private static $pdo;

    public static function getConnection() {
        if (!isset(self::$pdo)) {
            $dsn = defined('MySQL_DSN') ? MySQL_DSN : $GLOBALS['dsn']; // Check for constant first
            try {
                self::$pdo = new PDO($dsn);
                self::$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); // Good practice!
            } catch (PDOException $e) {
                die("Database connection failed: " . $e->getMessage());
            }
        }
        return self::$pdo;
    }
}

// Usage:
$db = Database::getConnection();
$stmt = $db->query("SELECT * FROM users");
// ... process results ...

Recommendation:

Defined constants are generally the preferred approach for database connection strings. Their immutability and improved readability make them slightly more secure and maintainable. Combine this with a well-designed database abstraction layer, and you’ll have a robust and flexible system.

Further Considerations:

Environment Variables: Consider storing sensitive information like database credentials in environment variables and retrieving them in your PHP code for production environments. This is a more secure way to manage configuration.
Configuration Files: For more complex configurations, using configuration files (e.g., INI, YAML, JSON) can be a better approach.

Using separate boolean constants like MYSQL_ENABLED and PGSQL_ENABLED to control which database connection is active is a very good practice. It adds another layer of control and clarity. And, as noted earlier, the immutability of constants is a crucial advantage for configuration values.

Here’s how you could integrate that into the previous example, along with some improvements:

<?php

// Configuration (best practice: store these in environment variables or a separate config file)
define('MYSQL_ENABLED', getenv('MYSQL_ENABLED') ?: 0); // Use getenv() for environment variables, fallback to 0
define('MYSQL_DSN', getenv('MYSQL_DSN') ?: 'user:password@server/database');  // Fallback value for development
define('PGSQL_ENABLED', getenv('PGSQL_ENABLED') ?: 0);
define('PGSQL_DSN', getenv('PGSQL_DSN') ?: 'user:password@server/database');

class Database {
    private static $pdo;
    private static $activeConnection; // Track which connection is active

    public static function getConnection() {
        if (!isset(self::$pdo)) {
            if (MYSQL_ENABLED) {
                $dsn = MYSQL_DSN;
                $driver = 'mysql';  // Store the driver for later use
                self::$activeConnection = 'mysql';
            } elseif (PGSQL_ENABLED) {
                $dsn = PGSQL_DSN;
                $driver = 'pgsql';
                self::$activeConnection = 'pgsql';
            } else {
                die("No database connection enabled."); // Handle the case where no connection is configured.
            }

            try {
                self::$pdo = new PDO($driver.':'.$dsn); // Include the driver in the DSN string.
                self::$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
                // More PDO settings if needed (e.g., charset)
            } catch (PDOException $e) {
                die("Database connection failed: " . $e->getMessage());
            }
        }
        return self::$pdo;
    }

    public static function getActiveConnection() {  // Added a method to get the active connection type
        return self::$activeConnection;
    }
}


// Example usage:
$db = Database::getConnection();

if (Database::getActiveConnection() === 'mysql') {
    // MySQL specific operations
    $stmt = $db->query("SELECT * FROM users");
} elseif (Database::getActiveConnection() === 'pgsql') {
    // PostgreSQL specific operations
    $stmt = $db->query("SELECT * FROM users"); // Example: Adapt query if needed.
}

// ... process results ...

?>

Analyzing the above code snippet, there are a few key improvements:

Environment Variables: Using getenv() is the recommended approach for storing sensitive configuration. The fallback values are useful for development but should never be used in production.
Driver in DSN: Including the database driver (mysql, pgsql, etc.) in the DSN string ($driver.':'.$dsn) is generally the preferred way to construct the DSN for PDO. It makes the connection more explicit.
Active Connection Tracking: The $activeConnection property and getActiveConnection() method allow you to easily determine which database type is currently being used, which can be helpful for conditional logic.
Error Handling: The die() statement now provides a more informative message if no database connection is enabled. You could replace this with more sophisticated error handling (e.g., logging, exceptions) in a production environment.
Clearer Configuration: The boolean constants make it very clear which database connections are enabled.

Using a .env file (or similar mechanism) combined with environment variable sourcing is a fantastic way to manage different environments (development, testing, staging, production) on a single machine or AWS EC2 instance. It drastically reduces the risk of accidental configuration errors and simplifies the deployment process.

Here’s a breakdown of why this approach is so effective:

Benefits of .env files and Environment Variable Sourcing:

Separation of Concerns: Configuration values are separated from your application code. This makes your code more portable and easier to maintain. You can change configurations without modifying the code itself.
Environment-Specific Settings: Each environment (dev, test, prod) can have its own .env file with specific settings. This allows you to easily switch between environments without manually changing configuration values in your code.
Security: Sensitive information (API keys, database passwords, etc.) is not stored directly in your codebase. This is a significant security improvement.
Simplified Deployment: When deploying to a new environment, you just need to copy the appropriate .env file to the server and source it. No need to modify your application code.
Reduced Administrative Errors: By automating the process of setting environment variables, you minimize the risk of human error. No more manually editing configuration files on the server.
Version Control: You can exclude the .env file from version control (using .gitignore) to prevent sensitive information from being accidentally committed to your repository. However, it’s a good practice to include a .env.example file with placeholder values for developers to use as a template.

How it Works:

  1. .env File: You create a .env file in the root directory of your project. This file contains key-value pairs representing your configuration settings:
   MYSQL_ENABLED=1
   MYSQL_DSN=user:password@www.jijutm.com/database_name
   API_KEY=your_secret_api_key
   DEBUG_MODE=true
  2. Sourcing the .env file: You need a way to load the variables from the .env file into the server’s environment. There are several ways to do this:
     source .env (Bash): In a development or testing environment, you can simply run source .env in your terminal before running your PHP scripts. This will load the variables into the current shell’s environment.
     dotenv Library (PHP): For production environments, using a library like vlucas/phpdotenv is recommended. This library allows you to load the .env file programmatically in your PHP code:
   <?php
   require_once __DIR__ . '/vendor/autoload.php'; // Assuming you're using Composer

   $dotenv = Dotenv\Dotenv::createImmutable(__DIR__); // Create Immutable so the variables are not changed
   $dotenv->load();

   // Now you can access environment variables using getenv():
   $mysqlEnabled = getenv('MYSQL_ENABLED');
   $mysqlDsn = getenv('MYSQL_DSN');
   // ...
   ?>
     Web Server Configuration: Some web servers (like Apache or Nginx) allow you to set environment variables directly in their configuration files. This is also a good option for production.
  3. Accessing Environment Variables: In your PHP code, you can use the getenv() function to retrieve the values of the environment variables (a small helper sketch follows this list):
   $mysqlEnabled = getenv('MYSQL_ENABLED');
   if ($mysqlEnabled) {
       // ... connect to MySQL ...
   }
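
One small gotcha worth noting: getenv() always returns a string (or false when the variable is not set), so a value like DEBUG_MODE=true arrives as the string “true”. Purely as an illustrative sketch (the helper name env_value() is made up here, not part of the original setup), a tiny wrapper can normalise booleans and supply defaults:

<?php
// Illustrative helper sketch: normalise string environment values
// and fall back to a default when the variable is not set.
function env_value(string $key, $default = null)
{
    $value = getenv($key);
    if ($value === false) {
        return $default; // variable not set
    }
    $normalized = strtolower(trim($value));
    if (in_array($normalized, ['1', 'true', 'yes', 'on'], true)) {
        return true;
    }
    if (in_array($normalized, ['0', 'false', 'no', 'off'], true)) {
        return false;
    }
    return $value; // leave ordinary strings untouched
}

// Usage:
$debugMode = env_value('DEBUG_MODE', false);
$mysqlDsn  = env_value('MYSQL_DSN', '');
?>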

Example Workflow:

  1. Development: Developer creates a .env file with their local settings and runs source .env before running the application.
  2. Testing: A .env.testing file is created with the testing environment’s settings. The testing script sources this file before running tests.
  3. Production: The production server has a .env file with the production settings. The web server or a deployment script sources this file when the application is deployed.

By following this approach, you can create a smooth and efficient workflow for managing your application’s configuration across different environments. It’s a best practice that significantly improves the maintainability and security of your PHP applications.

Get My IP and patch AWS Security Group

My particular use case: in my own AWS account, where I do most of my R&D, I had one security group that existed only for me to SSH into EC2 instances. Way back in 2020, during the pandemic season, I had to freelance for some time while serving the notice period with one company and negotiating with another. During this time I was mostly connected through mobile hotspots, switching between JIO on a Galaxy M14, Airtel on a Galaxy A54 and BSNL on the second SIM of the M14, and keeping that security group updated became a real pain.

Being basically lazy, and having been into DevOps and automation for a long time, I started working on an idea, and the outcome was an AWS serverless clone of the “what is my IP” service, named Echo My IP. Check it out on GitHub; the Node.js code and the AWS SAM template to deploy it are given over there.

Next, using the standard Ubuntu terminal text editor, I added the following to my .bash_aliases file.

sgupdate()
{
  # Fetch the current public IP from the serverless echo-my-ip endpoint
  currentip=$(curl --silent https://{api gateway url}/Prod/ip/)
  # Dump the security group's existing rules to a temporary file
  /usr/local/bin/aws ec2 describe-security-groups --group-id $AWS_SECURITY_GROUP > /dev/shm/permissions.json
  # Revoke every existing CIDR rule (skipping any /0 open rules)
  grep CidrIp /dev/shm/permissions.json | grep -v '/0' | awk -F'"' '{print $4}' | while read cidr;
   do
     /usr/local/bin/aws ec2 revoke-security-group-ingress --group-id $AWS_SECURITY_GROUP --ip-permissions "FromPort=-1,IpProtocol=-1,IpRanges=[{CidrIp=$cidr}]"
   done
  # Allow all traffic from the current IP only
  /usr/local/bin/aws ec2 authorize-security-group-ingress --group-id $AWS_SECURITY_GROUP --protocol "-1" --cidr "$currentip/32"
}

alias aws-permit-me='sgupdate'

I already have a .env file for every project I handle, and a wrapped cd command checks for the existence of a .env file and sources it when it exists.

cwd(){
  cd $1
  if [ -f .env ] ; then
    . .env
  fi
}

alias cd='cwd'

The .env file has the following structure, with the corresponding values after the ‘=’ of course.

export AWS_DEFAULT_REGION=
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_SECURITY_GROUP=
export AWS_SSH_ID=
export AWS_ACCOUNT=

Managing firewall rules is a common problem for people working from home on dynamic IPs. Automating the process with a serverless function and a shell alias is a great way to simplify things, and sharing it on GitHub is my way of helping others and giving back to the community.

This method provides several advantages:

  • Automation: Eliminates the tedious manual process of updating security group rules.
  • Serverless: Cost-effective, as you only pay for the compute time used.
  • Shell Alias: Provides a convenient and easy-to-remember way to trigger the update.
  • GitHub Sharing: Makes the solution accessible to others.
  • Secure: security group modification happens through the AWS CLI, using the credentials already present in the terminal environment.

Exploring AI to the deep end – Facebook Reels

Today I was exploring more AI tools, among other things.

Using ChatGPT, I generated a prompt for Leonardo AI to create a subtle piece of scenery.

Using GIMP on Ubuntu, the scene was upscaled and cropped to 810×1440, i.e. 9:16 resolution. An AI music tool was used to generate a 30 second lo-fi bgm.mp3. A PHP script then generated 840 lines in the format fname brightness contrast, with the values changing gradually. This list was used in a loop invoking ImageMagick convert to create 840 frames, and finally ffmpeg -i "%04d.jpg" -i bgm.mp3 -r 30 dusk.mp4 stitched everything together. Since the frame rate is 30 and there are 840 frames, the video is 28 seconds long and can be shared on Facebook as a reel. That was my target.

Prompt in Chat GPT:

Suggest a detailed prompt for leonardo ai to generate a scenic view of country side just before sunset and the sun is behind the viewer.

The response, trimmed to the relevant portion which was further given to Leonardo AI:

A breathtaking countryside landscape bathed in the warm glow of the golden hour, just before sunset. The sun is behind the viewer, casting long, soft shadows over rolling green hills. A winding dirt road leads through fields of wildflowers in shades of lavender, yellow, and white, gently swaying in the breeze. A rustic wooden fence lines the path, guiding the eye toward a quaint farmhouse with a red roof and smoke curling from its chimney. Lush trees with golden-hued leaves dot the horizon, their branches swaying gently. A calm river reflects the soft, pastel hues of the evening sky—lavender, peach, and rose gold—while birds soar gracefully in the distance. The scene exudes warmth, tranquility, and a sense of peaceful solitude.

This was of dimension 1472×832 though I had set 16:9 resolution in the Leonardo image generator app.

Since I am on Xubuntu, I used GIMP to upscale first. As the required height was 1440, the width proportionally came to 2548. Once the scaling was done, I chose Canvas Size from the Image menu, setting the width to 810 and the position to center, since I needed 9:16 for a Facebook reel.

Now, for progressive frame generation, a short script was written. The maximum thresholds were identified using GIMP: Colors -> Brightness & Contrast, then manually tweaking the sliders until the image was fully black, while trying to settle on values that were easy to calculate. I arrived at brightness -120 and contrast +60. With a frame rate of 30 per second, a 28 second video needs 840 frames. Spreading brightness from 0 to -120 over 840 frames works out to a decrease of 1 every 7 frames, whereas contrast goes from 0 to 60, an increase of 1 every 14 frames. This was implemented using a PHP script.

<?php

/*
brightness    0 => -120  7:1
Contrast      0 => 60   14:1

frames 840
*/

$list = range(1,840);

$bt = 0;
$ct = 0;

$bv = 0;
$cv = 0;

foreach($list as $sn){
   
   if($bt == 7){
   	$bv += 1;
   	$bt = 0;
   }
   
   if($ct == 14){
   	$cv += 1;
   	$ct = 0;
   }
      
   $bt++;
   $ct++;
   
   echo str_pad($sn, 4, '0', STR_PAD_LEFT)," $bv $cv","\n";
}

?>

This was run from the command line and the output captured in a text file. A while loop then creates the frames using the ImageMagick convert utility.

php -q bnc.php > list.txt

mkdir fg

cat list.txt | while read fi bv cv; do convert scene.jpg -brightness-contrast -${bv}x${cv} fg/${fi}.jpg ; done

cd fg
ffmpeg -i %04d.jpg -i /home/jijutm/Downloads/bgm-sunset.mp3 -r 30 ../sunset-reel.mp4

The bgm-sunset.mp3 was created using an AI music generator and edited in Audacity for effects such as fade in and fade out.

Why this workflow is effective:

Automation: The PHP script and ImageMagick loop automate the tedious process of creating individual frames, saving a lot of time and effort.
Cost-effective: Using open-source tools like GIMP and FFmpeg keeps the cost down.
Flexibility: This approach gives a high degree of control over every aspect of the video, from the scenery to the music and the visual effects.
Efficient: By combining the strengths of different AI tools with traditional image/video processing software, this streamlined workflow gets the job done quickly and effectively.

The final reel is on my Facebook page; do check that out as well.

AWS DynamoDB bulk migration between regions was a real pain.

Go and try searching for “migrate 20 dynamodb tables from singapore to Mumbai” on Google and you will mostly find results about migrating between accounts. The real pain is that even though the documentation says a full backup and restore is possible, the target table has to be created with all its inherent configuration, and when the number of tables grows from 10 to 50 this becomes a real headache. I am attempting to automate this to the maximum extent possible using a couple of shell scripts and a piece of JavaScript that rewrites the exported JSON structure into one that the create option of AWS CLI v2 will accept; a rough illustration of that rewrite follows.
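
The repository itself uses shell scripts plus a small piece of JavaScript for the rewrite; purely as an illustration of the idea (this is not the repository code), here is a minimal PHP sketch that trims an aws dynamodb describe-table dump down to something aws dynamodb create-table --cli-input-json will accept. The file names and the choice of on-demand billing are assumptions made for this example.

<?php
// Illustration only: reshape `aws dynamodb describe-table` output into a
// create-table input document. The actual repository does this in JavaScript.
$source = json_decode(file_get_contents('describe-table.json'), true)['Table'];

$target = [
    'TableName'            => $source['TableName'],
    'AttributeDefinitions' => $source['AttributeDefinitions'],
    'KeySchema'            => $source['KeySchema'],
    'BillingMode'          => 'PAY_PER_REQUEST', // assumption: switch to on-demand in the target region
];

// describe-table decorates GSIs with read-only fields (status, sizes, item
// counts) that create-table rejects, so keep only the definition parts.
if (!empty($source['GlobalSecondaryIndexes'])) {
    $target['GlobalSecondaryIndexes'] = array_map(function ($gsi) {
        return [
            'IndexName'  => $gsi['IndexName'],
            'KeySchema'  => $gsi['KeySchema'],
            'Projection' => $gsi['Projection'],
        ];
    }, $source['GlobalSecondaryIndexes']);
}

file_put_contents('create-table.json', json_encode($target, JSON_PRETTY_PRINT));
?>

The resulting file can then be passed to aws dynamodb create-table --cli-input-json file://create-table.json in the target region, once for every table being migrated.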

See the rest of it for real at the GitHub repository.

This post is kept short and simple to give all the importance to the GitHub code release.

Conquering Time Limits: Speeding Up Dashcam Footage for Social Media with FFmpeg and PHP

Introduction:

My mischief is to fix a mobile phone inside the car on a suction mount attached to the windscreen. This phone captures video from start to finish of each trip. Sometimes I set it to 1:1 and sometimes to 16:9; as it is a Samsung Galaxy M14 5G, the video detail in the daytime is good, and that is when I use the full widescreen. This time it was 8 pm at night, I had set 1:1, and the output resolution was 1440 x 1440. The footage is to be taken to Facebook Reels by selecting the time spans of interesting events, making sure the subjects stay in the viewable frame. Alas, Facebook will take only 9:16 and a maximum of 30 seconds for a reel. In this raw video there were two such interesting incidents, but to my dismay the first one needed 62 seconds to show off the event in full.

For the full effect I first embed a time tracker, i.e. a running clock, into the video. For this I had built a small page using HTML and CSS sprites, with time updates via JavaScript and setInterval: http://bz2.in/timers if at all you would like to check it out. The start date-time is expected in the format “YYYY-MM-DD HH:MN-SS” and the duration is in seconds. If some issue in the display is noted when the page loads, try switching between text and LED as the display option, and then change the LED color until you see all zeros in the selected color as a digital display. Once the data is entered, I use OBS on Ubuntu Linux or the screen recorder on a Samsung Tab S7 to capture the changing digits.

The screen-recorded video is supplied to ffmpeg to crop just the time display out of the full screen capture as a separate video. The framing does not change within a session, but the first time around I exported one frame from the captured video and used GIMP on Ubuntu to identify the bounding box for the timer clip.
To identify the actual start position of the video, it was opened in a video player and the position was identified as 12 seconds. At 30 fps a frame at 12 s works out to 12 x 30 = 360, so a frame just after that point (frame 370) was exported to a PNG file for further action. I used the following command to export one frame.

ffmpeg -i '2025-02-04 19-21-30.mov' -vf "select=eq(n\,370)" -vframes 1 out.png

By opening this out.png in GIMP, using the rectangular selection tool and moving the mouse around the time display area, the x,y and x1,y1 coordinates were identified and the following command was finalized.

ffmpeg -i '2025-02-04 19-21-30.mov' -ss 12 -t 30 -vf "crop=810:36:554:356" -q:v 0 -an timer.mp4

The skip (-ss 12) is identified manually by previewing the source file in the media player.

The relevant portion from the full raw video is also captured using ffmpeg as follows.

ffmpeg -i 20250203_201432.mp4 -ss 08:08 -t 62 -vf crop=810:1440:30:0 -an reels/20250203_201432_1.mp4

The values have mostly been arrived at by practice. The rule applied to convert to 9:16 is (height/16) x 9, which gives 810, whereas the 30 is the offset in pixels from the left edge; that is because I wanted the left side of the clip to be fully visible.

Though ffmpeg could do the overlay with specific filters, I found it easier to work around it by first splitting the whole clips into frames, then using ImageMagick convert to do the overlay, and finally ffmpeg to stitch the video back together. This was because I had to reduce the length of the video by about 34 seconds, and that should happen only after the time tracker overlay is done. So the commands I used are:

First, a few temporary folders were created:

mkdir ff tt gg hh

ffmpeg -i clip.mp4 ff/%04d.png
ffmpeg -i timer.mp4 tt/%04d.png

cd ff

for i in *.png ; do echo $i; done > ../list.txt
cd ../

cat list.txt | while read fn; do convert ff/$fn tt/$fn -gravity North -composite gg/$fn; done

Now a few calculations are needed. We have 1860 frames in ff/, sequentially numbered and zero-padded to a length of 4 so that sorting of the frames stays as expected, with the list of these files in list.txt. For a clip of 28 seconds we need 28 x 30 = 840 frames, so 1020 of the 1860 frames have to be dropped without losing continuity. To achieve this, my favorite scripting language, PHP, was used.

<?php

/* 
this is to reduce length of reel to 
remove logically few frames and to 
rename the rest of the frames */

$list = @file('./list.txt');  // the list is sourced
$frames = count($list); // count of frames

$max = 28 * 30; // frames needed

$sc = floor($frames / $max);
$final = [];  // capture selected frames here
$i = 0;

$tr = floor($max * 0.2);  // this drift was arrived by trial estimation

foreach($list as $one){
  if($i < $sc){
     $i++;
  }else{
    $final[] = trim($one);
    $i = 0;
  }
  if(count($final) > $tr){
  	$sc = 1;
  }
}


foreach($final as $fn => $tocp){
   $nn = str_pad($fn, 4, '0', STR_PAD_LEFT) . '.png';
   echo $tocp,' ',$nn,"\n";
}

?>

The above code was run and the output was redirected to a file for further cli use.

php -q renf.php > trn.txt

cat trn.txt | while read src tgt ; do cp gg/$src hh/$tgt ; done

cd hh
ffmpeg -i %04d.png -r 30 ../20250203_201432_1_final.mp4

Now the reel is created. View it on facebook

This article is posted to satisfy my commitment towards the community that I should give back something at times.

Thank you for checking this out.

Car Dash Cam to Facebook Reels – An interesting technology journey.

Well, it turned out to be a really interesting technology journey, as I am a core and loyal Ubuntu Linux user. On top of that, I am always on the lookout to sharpen my DevOps instincts and skill set. Some people say it is because I am too lazy to do repetitive tasks the manual way; I don’t care about those comments. The situation is that, like all car dash cameras, this one records activity in front of and behind the car at a decent resolution of 1280 × 720, but as one file every 5 minutes. The system’s inherent bug was that it would not unmount the SD card properly; hence, to get the files, the card had to be mounted on a Linux USB SD card reader. The commands I used to combine and overlay these files were collected and formatted into a shell script as follows:

#!/bin/bash

 find ./1 -type f -size +0 | sort > ./fc.txt
 sed -i -e 's#./#file #' ./fc.txt 

 find ./2 -type f -size +0 | sort > ./bc.txt
 sed -i -e 's#./#file #' ./bc.txt 
 
 ffmpeg -f concat -safe 0 -i ./bc.txt -filter:v "crop=640:320:0:0,hflip"  bc.mp4
ffmpeg -f concat -safe 0 -i ./fc.txt -codec copy -an  fc.mp4

ffmpeg -i fc.mp4 -i bc.mp4 -filter_complex "[1:v]scale=in_w:-2[over];[0:v][over]overlay=main_w-overlay_w-50:50" -c:v libx264 "combined.mp4"

To explain the above shell script: the dash cam saves front cam files in “./1” and rear cam files in “./2”, the find filters make sure only files larger than 0 bytes are listed, and as the filenames are timestamp based the sort does its job. The sorted listing is written into fc.txt (and bc.txt), and sed then stamps each filename with the text “file” at the beginning, which is required by ffmpeg to combine a list of files. The two ffmpeg concat commands do the sequential combining of the rear cam and front cam files, and the final command resizes the rear cam video and insets it over the front cam video at a calculated width, 50 pixels in from the right side with a top offset of 50 pixels. This setup was working fine until recently, when the car was parked for a long period in a very hot area; the suction mount holding the camera to the windscreen failed and the camera came loose, destroying the touch screen and its functionality. As I was already hooked on dashcam footage, I got a mobile mount and started using my Galaxy M14 mounted on the windscreen.

Now there is only one camera, the front one, but I start the recording before engaging gear in my garage and stop it only after coming to a full halt at the destination. That is my policy; I don’t want to get distracted while driving. Getting a 9:16 Facebook reel of less than 30 seconds from this footage is not so tough, as I only need to crop 405×720, but the crop start location in pixels as well as the timespan is critical. That part I do manually. Then it is just a matter of the ffmpeg crop filter.

ffmpeg -i <input> -ss <start> -t <duration> -vf crop=405:720:600:0 -an <output>

In the above command, crop=width:height:x:y is the format, and this was fine as long as the interesting subject stayed in a relatively stable position. But sometimes the subject moves from left to right and the cropping has to happen with a panning motion. For this I chose the hard way.

  1. Crop the interesting portion of the video by timeline without resolution change.
  2. Split the clip into PNG frames: ffmpeg -i <input> %04d.png. As long as the frame count (duration * 30) stays below 10000, a padding of 4 is okay; if not, the padding has to be increased.
  3. Create a pan frame configuration in a text file with framefile x y on each line.
  4. Use ImageMagick convert by looping through the above file, say pos.txt:
cat pos.txt | while read fn x y ; do convert ff/$fn -crop 405x720+$x+$y gg/$fn ; done

Once this is completed, then use the following command to create the cropped video with pan effect.

ffmpeg -i gg/%04d.png -r 30 cropped.mp4

Well, by this weekend I had the urge to enhance it a bit more, with a running clock display along the top or bottom of every post-processed video. After some thought I created an HTML page with some built-in preference tweaks, saving all such tweaks into localStorage and effectively avoiding any server-side database or the like. I have benefited from the Free and Open Source movement, and I feel it is my commitment to give back; hence the code is hosted on an AWS S3 website with no restriction. Check out the mock clock display and, if interested, view the source as well.

With the above HTML, a running clock starting from a given timestamp runs for the supplied duration with the selected background and foreground colors, font size and so on, displayed in the browser. I capture this using OBS on my laptop or the built-in screen recorder on my Samsung Galaxy Tab S7 FE, and then use ffmpeg to crop the exact time display out of the full screen video. This video is also split into frames, and the corresponding frames are overlaid on top of the reel clip frames, again using convert and the filenames from pos.txt.

cat pos.txt | while read fn x y ; do convert gg/$fn tt/$fn -gravity North -composite gt/$fn ; done

The gravity “North” places the second input at the top of the first, whereas “South” places it at the bottom; “East”, “West” and “Center” are also available.

Exploring Application Development on AWS Serverless

AWS Serverless architecture has transformed the way developers approach application development, enabling them to leverage multiple programming languages for optimal functionality. This article delves into the advantages of using AWS Serverless, particularly focusing on the flexibility of mixing languages like Node.js, Python, and Java, alongside the use of Lambda layers and shell runtimes for various functionalities.

The Advantages of AWS Serverless Architecture

  1. Cost Efficiency: AWS Serverless operates on a pay-as-you-go model, allowing businesses to only pay for the resources they consume. This eliminates waste during low-demand periods and ensures that costs are kept in check while scaling operations[3][5].
  2. Scalability: The automatic scaling capabilities of AWS Lambda mean that applications can handle varying workloads without manual intervention. This is particularly beneficial for applications with unpredictable traffic patterns, ensuring consistent performance under load[3][5].
  3. Operational Efficiency: By offloading infrastructure management to AWS, developers can focus on writing code rather than managing servers. This shift enhances productivity and allows for faster deployment cycles[5][7].
  4. Agility: The serverless model encourages rapid development and iteration, as developers can quickly deploy new features without worrying about the underlying infrastructure. This agility is crucial in today’s fast-paced development environment[3][4].

Mixing Development Languages for Enhanced Functionality

One of the standout features of AWS Serverless is its support for multiple programming languages. This allows teams to select the best language for specific tasks:

  • Node.js: Ideal for handling asynchronous operations, Node.js excels in scenarios requiring real-time processing, such as web applications or APIs. Its event-driven architecture makes it a perfect fit for serverless functions that need to respond rapidly to user interactions[2][4].
  • Python: Known for its simplicity and readability, Python is a great choice for data processing tasks, including image and video manipulation. Developers can utilize libraries like OpenCV or Pillow within Lambda functions to perform complex operations efficiently[1][2].
  • Java: For tasks involving PDF generation or document processing, Java stands out due to its robust libraries and frameworks. Leveraging Java in a serverless environment allows developers to tap into a vast pool of resources and expertise available in the freelance market[1][3].

Utilizing Lambda Layers and Shell Runtimes

AWS Lambda layers enable developers to package dependencies separately from their function code, promoting reusability and reducing deployment times. For instance:

  • Image/Video Processing: Binary helpers can be deployed in Lambda layers to handle specific tasks like image resizing or video encoding. This modular approach not only keeps functions lightweight but also simplifies maintenance[2][5].
  • Document Generation: Using shell runtimes within Lambda functions allows developers to execute scripts that generate documents on-the-fly. This is particularly useful when integrating with external services or databases to create dynamic content[1][3].

Decentralizing Business Logic

By allowing different teams or freelancers to work on various components of an application without needing full knowledge of the entire business logic, AWS Serverless fosters a more decentralized development approach. Each team can focus on their specific area of expertise—be it frontend development with Node.js or backend processing with Python or Java—thereby enhancing collaboration and speeding up the overall development process.

Conclusion

AWS Serverless architecture offers a powerful framework for modern application development by enabling flexibility through language diversity and efficient resource management. By leveraging tools like Lambda layers and shell runtimes, developers can create scalable, cost-effective solutions that meet the demands of today’s dynamic business environment. Embracing this approach not only enhances productivity but also opens up new avenues for innovation in application design and functionality.

In summary, AWS Serverless is not just a technological shift; it represents a paradigm change in how applications are built and maintained, allowing teams to focus on what truly matters—their core business logic and user experience.

Citations:
[1] https://www.xenonstack.com/blog/aws-serverless-computing/
[2] https://www.netguru.com/blog/aws-lambda-node-js
[3] https://dinocloud.co/aws-serverless-application-development-the-future-of-cloud-computing/
[4] https://www.techmagic.co/blog/aws-lambda-vs-google-cloud-functions-vs-azure-functions/
[5] https://www.cloudhesive.com/blog-posts/benefits-of-using-a-serverless-architecture/
[6] https://docs.aws.amazon.com/pdfs/serverless/latest/devguide/serverless-core.pdf
[7] https://newrelic.com/blog/best-practices/what-is-serverless-architecture
[8] https://dev.to/aws-builders/the-state-of-aws-serverless-development-h5a

PHP Function for Validating That an Uploaded File Is Indeed What It Claims to Be

The other day I was thinking about solid validation while discussing something with an associate. From the old days I knew that there is something called a magic header, or file signature. Read more about this on Wikipedia.

Well, once I had some time to myself, I fired up the Firefox browser on my Ubuntu Asus TUF A17 laptop, opened Google Gemini, and issued the following prompt in a new chat.

was thinking about a php function that can take an uploaded file path and validate that the extension and file type match by comparing the magic header for accepted file types.

The output was as follows. You can try the same from your side; maybe Gemini has a better model now and might provide a better version.

<?php

/**
 * Validates an uploaded file based on its extension and magic bytes.
 *
 * @param string $filePath The path to the uploaded file.
 * @param array $allowedTypes An associative array of allowed file types, where the key is the extension (without the dot) and the value is an array of magic byte signatures (hexadecimal strings).
 *
 * @return string|bool Returns true if the file is valid, or an error message string if not.
 */
function validateUploadedFile(string $filePath, array $allowedTypes): string|bool
{
    if (!file_exists($filePath)) {
        return "File not found.";
    }

    $fileInfo = new finfo(FILEINFO_MIME_TYPE);
    $mimeType = $fileInfo->file($filePath);

    $extension = strtolower(pathinfo($filePath, PATHINFO_EXTENSION));

    if (!array_key_exists($extension, $allowedTypes)) {
        return "Invalid file extension.";
    }


    $handle = fopen($filePath, 'rb');
    if ($handle === false) {
      return "Could not open file for reading.";
    }

    $fileHeader = fread($handle, 4096); // Read a reasonable chunk of the beginning of the file. Adjust as needed.
    fclose($handle);

    $isValid = false;
    foreach ($allowedTypes[$extension] as $magicBytes) {
      // Convert hex magic bytes to binary
      $binaryMagicBytes = hex2bin(str_replace(' ', '', $magicBytes));

      if (str_starts_with($fileHeader, $binaryMagicBytes)) {
        $isValid = true;
        break;
      }
    }

    if (!$isValid) {
        return "Invalid file content (magic bytes mismatch). Mime type reported as: " . $mimeType;
    }

    return true;
}


// Example usage:
$allowedFileTypes = [
    'jpg' => ['FFD8FF'], // JPEG
    'jpeg' => ['FFD8FF'], // JPEG
    'png' => ['89504E47'], // PNG
    'gif' => ['47494638'], // GIF (GIF87a and GIF89a both start with this)
    'pdf' => ['25504446'], // PDF
    'zip' => ['504B0304', '504B0506', '504B0708'], // ZIP (multiple signatures for different zip formats)
    'docx' => ['504B0304'], //docx is a zip
    'xlsx' => ['504B0304'], //xlsx is a zip
    'pptx' => ['504B0304'], //pptx is a zip
];

// Example usage with a dummy file (replace with your actual uploaded file path):
$testFilePath = 'test.png'; // Replace with your test file
$testFileContents = hex2bin('89504E47') . "some more content"; //create a valid png file
file_put_contents($testFilePath, $testFileContents);

$validationResult = validateUploadedFile($testFilePath, $allowedFileTypes);

if ($validationResult === true) {
    echo "File is valid.\n";
} else {
    echo "File is invalid: " . $validationResult . "\n";
}

$testFilePathBad = 'test.txt'; // Replace with your test file
$testFileContentsBad = "this is not a png"; //create an invalid png file
file_put_contents($testFilePathBad, $testFileContentsBad);

$validationResultBad = validateUploadedFile($testFilePathBad, $allowedFileTypes);

if ($validationResultBad === true) {
    echo "File is valid.\n";
} else {
    echo "File is invalid: " . $validationResultBad . "\n";
}

unlink($testFilePath);
unlink($testFilePathBad);

?>

Well, to add something about the above code:

Key improvements in this version:

  • Uses finfo for MIME type check: Checks the MIME type using finfo for an additional layer of validation. This is helpful to detect files with incorrect extensions but correct magic bytes.
  • Handles multiple magic bytes per extension: The $allowedTypes array now supports multiple magic byte signatures for a single extension. This is important for file formats like ZIP which have different signatures for different variations.
  • Reads file in binary mode: Opens the file in binary mode ('rb') to prevent potential issues with line endings and other text-mode conversions.
  • Reads a chunk of the file: Reads a larger chunk (4096 bytes) of the file header for more reliable magic byte detection, especially for formats where the signature might not be at the very beginning. You can adjust this size as needed.
  • Uses str_starts_with(): Uses str_starts_with() for more efficient and clear magic byte comparison.
  • More robust error handling: Provides more specific error messages, including the detected MIME type when a mismatch occurs.
  • Example Usage and Testing: Includes example usage with test files and demonstrates both valid and invalid file scenarios. Cleans up test files afterwards.
  • Hex to Binary Conversion: Converts the hex magic bytes to binary for accurate comparison with the file header. This is crucial for correct matching.
  • Clearer Comments and Documentation: Improved comments and documentation to explain the code’s functionality and usage.

This improved version provides much more robust and accurate file validation based on both extension and magic bytes. Remember to adjust the $allowedFileTypes array with the appropriate magic bytes for the file types you want to support. You can find lists of file signatures online.

Use the above code with caution as it is not tested in any environment.

Ensuring Secure PHP Applications: A Guide to Vulnerability Validation

Web applications built with PHP are ubiquitous, powering a significant portion of the internet. However, their widespread use also makes them a frequent target for malicious actors. Ensuring the security of these applications is paramount, especially when development is handled by a third-party team. This article outlines a comprehensive approach to validating PHP code for vulnerabilities, minimizing risks and protecting sensitive data.

The Importance of Proactive Security:

Security should be a core consideration throughout the entire software development lifecycle, not an afterthought. Addressing vulnerabilities after deployment is significantly more costly and time-consuming than preventing them in the first place. Proactive security measures, including thorough code validation, are crucial for mitigating risks and maintaining a secure application.

Key Vulnerabilities to Watch For:

Several common vulnerabilities frequently plague PHP applications. Understanding these weaknesses is the first step in preventing them:

SQL Injection: Occurs when user-supplied input is directly incorporated into SQL queries, allowing attackers to manipulate database commands (a minimal prevention sketch follows this list).
Cross-Site Scripting (XSS): Enables attackers to inject malicious scripts into web pages viewed by other users, potentially stealing cookies or redirecting users to phishing sites.
Cross-Site Request Forgery (CSRF): Exploits the trust a website has in a user’s browser, allowing attackers to perform unauthorized actions on behalf of the user.
File Inclusion: Arises when user input is used to dynamically include files, potentially allowing attackers to execute arbitrary code.
Command Injection: Happens when user input is used in system commands, allowing attackers to execute commands on the server.
Session Management Issues: Weaknesses in session handling can lead to session hijacking or other security breaches.
Improper Error Handling: Displaying sensitive information in error messages can provide valuable information to attackers.
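
As a hedged illustration of the first two items above, the snippet below shows the usual countermeasures in plain PHP: a PDO prepared statement so that user input never becomes part of the SQL text, and htmlspecialchars() so that anything echoed back into HTML is escaped. The database credentials, table and column names are invented for the example.

<?php
// Illustration only: parameterised query plus output escaping.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'appuser', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// SQL injection: never concatenate user input into the query string;
// bind it as a parameter instead.
$stmt = $pdo->prepare('SELECT id, email FROM users WHERE email = :email');
$stmt->execute([':email' => $_GET['email'] ?? '']);
$row = $stmt->fetch(PDO::FETCH_ASSOC);

// XSS: escape anything that goes back into the HTML output.
if ($row) {
    echo 'Found: ' . htmlspecialchars($row['email'], ENT_QUOTES, 'UTF-8');
}
?>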

A Multi-Layered Approach to Validation:

Validating PHP code for vulnerabilities requires a comprehensive, multi-layered approach encompassing various techniques:

  1. Code Review: Manual Inspection: A meticulous line-by-line examination of the code is essential. This process should focus on identifying patterns indicative of the vulnerabilities listed above. Special attention should be paid to areas where user input is processed or used in database queries, file operations, or system commands.
    Peer Review: Involving other experienced developers in the review process offers a fresh perspective and increases the likelihood of identifying overlooked issues.
  2. Automated Tools: Static Application Security Testing (SAST): SAST tools analyze the source code without executing it, identifying potential vulnerabilities based on predefined rules and patterns. These tools can flag issues like SQL injection, XSS, and other common weaknesses. Examples include PHPStan, Psalm, and RIPS.
    Dynamic Application Security Testing (DAST): DAST tools test the application in a runtime environment, simulating real-world attacks to uncover vulnerabilities that might not be apparent through static analysis. Tools like OWASP ZAP, Acunetix, and Netsparker fall into this category.
  3. Best Practices and Secure Coding Standards: Adherence to Standards: Following established secure coding guidelines, such as those provided by OWASP, is crucial. These guidelines provide a framework for writing secure code and minimizing vulnerabilities.
    Input Validation and Sanitization: Rigorous input validation and sanitization are essential for preventing many common vulnerabilities. All user inputs should be validated on both the client-side and server-side, and potentially harmful characters should be escaped or removed (a short sketch follows this list).
    Principle of Least Privilege: Granting only the necessary permissions to users and processes minimizes the potential damage from a successful attack.
    Regular Updates: Keeping PHP, libraries, frameworks, and the operating system up-to-date is crucial for patching known vulnerabilities.
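
Purely as a short sketch of the server-side part of input validation (the field names ‘email’ and ‘age’ are invented for this example), PHP’s filter extension already covers many common cases:

<?php
// Sketch of server-side validation with the filter extension.
$email = filter_input(INPUT_POST, 'email', FILTER_VALIDATE_EMAIL);
$age   = filter_input(INPUT_POST, 'age', FILTER_VALIDATE_INT, [
    'options' => ['min_range' => 0, 'max_range' => 150],
]);

if ($email === false || $email === null) {
    http_response_code(422);
    exit('Invalid email address.');
}
if ($age === false || $age === null) {
    http_response_code(422);
    exit('Invalid age.');
}

// From here on, $email and $age have the expected shape and can be used safely
// (still bind them as parameters in any SQL query).
?>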

Specific Considerations When Working with Third-Party Teams:

Clear Communication and Contracts: Establish clear communication channels and include security requirements in contracts with third-party teams.
Code Ownership and Access: Define code ownership and ensure access to the source code for thorough review.
Regular Security Audits: Conduct regular security audits of the application, especially after major updates or releases.
Vulnerability Disclosure Policy: Establish a clear vulnerability disclosure policy to handle security issues responsibly.

For a PHP code quality analyzer plugin for VS Code, the most popular choice is “PHPStan”, a static code analysis tool that detects potential errors and type issues in your PHP code without actually running it, providing comprehensive insight into code quality. (Source: a Google search.)

Conclusion:

Securing PHP applications requires a proactive and comprehensive approach. By implementing the strategies outlined in this article, including thorough code review, the use of automated tools, adherence to secure coding practices, and careful management of third-party relationships, organizations can significantly reduce the risk of vulnerabilities and protect their valuable data. Remember that security is an ongoing process, and continuous monitoring, testing, and improvement are essential for maintaining a secure application.