I Replaced My Airflow DAG with 3 Lines of English
After 4 years of writing DAGs, I asked myself: what if a pipeline could understand what I want, not just what I wrote? Here's what I built.
Data Platform Engineer · AWS · CrewAI · Strands SDK · Building in Public
Architecting cloud-native financial systems. Scaling AWS payment platforms processing 13M+ records daily. Building in public.
About

Viswanath Nagarajan
Data Platform Engineer · San Antonio, TX
Data Platform Engineer specializing in cloud-native financial systems and AI-augmented engineering. Architected agentic frameworks using CrewAI/Strands SDK to reduce engineering cycles from weeks to hours. Scaling AWS payment platforms processing 13M+ records daily and building AgentFlow — an open-source agentic ETL framework.
4+
Years Experience
13M+
Records Processed
Work
Architected agentic AI framework using CrewAI and Strands SDK with multi-agent orchestration to autonomously convert legacy code.
Serverless platform unifying multi-channel payment processing into a single AWS architecture with stateful orchestration and exactly-once transaction guarantees.
Migration engine for automated financial institution onboarding and data ingestion.
Building in public
Every tool I wish existed when I needed it. Built in the open, shipped continuously.
Stop writing DAGs. Define goals.
AI-native ETL orchestration framework. Define your pipeline in plain English — agents handle orchestration, failure recovery, schema validation, and documentation. dbt meets CrewAI.
Real-time city intelligence, open data.
Live streaming analytics platform ingesting Austin Open Data 311 service requests. Kinesis → Lambda → Redshift pipeline with a React dashboard showing neighborhood trends, response-time SLAs, and anomaly detection.
Something is brewing — stay tuned.
Building in public means shipping continuously. The next tool is in the ideation phase — follow on GitHub or LinkedIn to see it take shape.
Flagship project
Stop writing DAGs. Define goals.
Why I built this
After 4 years of writing data pipelines I kept running into the same problem: DAGs describe what you wrote, not what you want. The intent lives in the README. The code drifts. They never sync.
While experimenting with CrewAI and the Strands SDK for multi-agent orchestration, I realized the same pattern — describe a goal, agents handle the how — could completely replace the way we write ETL pipelines. So I built AgentFlow: plain English in, typed pipeline steps out. No eval. No codegen. Real output.
It's early. It's rough in places. And it already cuts pipeline onboarding from hours to minutes.
MIT License · Open Source · In Active Development
Roadmap
Alpha — Core engine + Airflow backend · Shipped
Self-Healing — Agent-driven failure recovery · In progress
Cloud Native — AWS Step Functions backend
GA — Prefect + Dagster backends, docs site
# AgentFlow
from agentflow import Pipeline

pipeline = Pipeline.from_goal(
    """Extract customers from Postgres daily,
    validate schema, load to Redshift,
    alert Slack if error rate > 2%."""
)
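"Typed pipeline steps out" can be pictured as structured step objects rather than generated code. The sketch below is hypothetical: `PipelineStep` and its fields are not AgentFlow's real API, just one way a goal like the one above could expand under those assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class PipelineStep:
    """Hypothetical typed step; not AgentFlow's actual data model."""
    name: str
    kind: str                      # "extract" | "validate" | "load" | "alert"
    config: dict = field(default_factory=dict)

# One plausible expansion of the three-line goal:
steps = [
    PipelineStep("extract_customers", "extract",
                 {"source": "postgres", "schedule": "daily"}),
    PipelineStep("validate_schema", "validate", {"on_failure": "halt"}),
    PipelineStep("load_redshift", "load", {"target": "redshift"}),
    PipelineStep("alert_slack", "alert", {"error_rate_gt": 0.02}),
]
```

Because the steps are data, not strings of code, an orchestrator backend (Airflow today, per the roadmap) can render them without `eval` or codegen.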
Career
Financial data pipelines with Airflow, Spark, Hadoop
Background
Degrees
M.S. in Computer Science
New York University (NYU)
Sept 2021 – May 2023
B.Tech in Computer Science & Engineering
SRM University
Aug 2016 – May 2020
Certifications
AWS Certified Cloud Practitioner (CCP) · Certified
AWS Certified Data Engineer · In progress
Skills
Platforms
Programming
AI & Automation
DevOps & Reliability
Writing
I post weekly on LinkedIn
Deep dives on AI-native data engineering, AgentFlow updates, and what I'm learning building in public. No fluff.
Follow on LinkedIn →