



Every action we take today creates data — booking a cab, checking an IPL score, scanning a QR code, scrolling a reel, or refreshing an app.
And this data doesn’t arrive once a day or once an hour. It arrives every second, and in massive volumes.
While you’re reading this paragraph, companies across the world are receiving millions of events from mobile apps, websites, sensors, payment systems, and devices. And the faster this data arrives, the faster businesses are expected to react.
Think about your everyday experience:
- Your cab’s ETA updates live
- IPL scores refresh ball by ball
- UPI payments succeed or fail in milliseconds
- Food delivery apps track riders in real time
- OTT platforms recommend content as you watch
Now imagine trying to power all of this using batch processing — where data is processed only after everything has fully arrived.
It simply doesn’t work.
A cab ETA calculated 20 minutes late is useless.
A fraud detection model that runs at midnight is too late.
A “live” dashboard refreshed hourly is not live at all.
Batch processing still has its place — but today, it’s no longer enough on its own.
This is where streaming becomes essential.
Not because it’s a buzzword.
Not because everyone is talking about it.
But because modern systems demand immediate insights.
And the good news?
Streaming doesn’t have to be complex.
In this blog, we’ll break down how streaming really works, when data is actually considered streaming, and how Azure Databricks helps you process streaming data in a simple, scalable, production-ready way.
Before we talk about streaming, let’s clear up a common confusion.
Batch vs streaming is not about tools.
It’s about latency — how often data arrives and how soon you process it.
Let’s use a very practical example with Azure Data Lake Storage (ADLS).
- You receive data yearly in ADLS → you process it in Databricks → batch
- You receive data monthly in ADLS → you process it in Databricks → batch
- You receive data daily or hourly in ADLS → you process it later → still batch
Even hourly data is batch, because:
- The data waits
- Processing happens after arrival
- Insights are delayed
Batch works well when:
- Latency is acceptable
- Decisions don’t need to be instant
- Data value doesn’t decay quickly
Examples:
- Financial reports
- Historical trend analysis
- Monthly KPIs
Streaming starts when latency becomes critical.
- Data arrives every few seconds or minutes
- Data is processed as it arrives
- Insights lose value if delayed
As a rough rule of thumb:
- Latency of around 1–5 minutes or less → streaming
- More than ~10 minutes → starts behaving like batch again

Streaming is about continuous flow, not fixed intervals.
This is why:
- Cab ETAs update continuously
- Fraud is detected during the transaction
- Stock prices refresh instantly
Waiting even a few minutes can mean lost value.
Let’s compare the old world vs the new world:

Companies today need to:
- Adjust prices dynamically
- Monitor systems continuously
- Trigger alerts instantly
- Personalize experiences live
Batch pipelines simply can’t meet these demands alone.

So is streaming complex to build? Traditionally, yes.
Streaming used to mean:
- Complex infrastructure
- Multiple systems to manage
- Hard-to-debug pipelines
- Specialized skills
Databricks changes that. With Azure Databricks you can:
- Use the same Spark APIs for batch and streaming
- Write simple, readable code
- Handle batch and streaming almost identically (the short example after this list shows how)
- Scale without managing infrastructure
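To make that concrete, here is a minimal, hypothetical comparison. The path and schema are placeholders, and `spark` is the SparkSession that Databricks notebooks provide automatically. Reading the same ADLS folder as a batch job versus as a stream changes only one word.

```python
from pyspark.sql.types import StructType, StringType, DoubleType

schema = StructType().add("order_id", StringType()).add("amount", DoubleType())
path = "abfss://<container>@<storage-account>.dfs.core.windows.net/orders/"  # placeholder path

# Batch: read whatever is in the folder right now
batch_df = spark.read.schema(schema).json(path)

# Streaming: keep picking up new files as they arrive -- same API, one word changes
stream_df = spark.readStream.schema(schema).json(path)
```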
You don’t need to “think streaming first”.
You just need to understand the components.
At its core, every streaming pipeline has just four building blocks:
1. Producer – where the data is generated.
Examples:
- Mobile apps
- Websites
2. Receiver – receives the incoming events and buffers them safely at scale.
Common examples:
- Azure Event Hubs
- Apache Kafka
3. Optional storage – a landing zone where the stream can be staged before Databricks processes it.
Examples:
- ADLS
- S3 bucket
4. Databricks – processes the data in near real time.
This is where the streaming logic lives: Databricks reads the streaming data, processes it continuously (transformations, aggregations), and writes the results to storage, dashboards, or downstream systems.
In a real-time streaming pipeline, data flows in a simple, logical sequence.
First, data is generated by a producer, such as an application, website, or data generator, where events are created continuously.
These events are then sent to a receiver like Azure Event Hubs, which safely collects and buffers the incoming data at scale.
In some cases, the data may be temporarily written to optional storage such as ADLS or S3 — this is useful for durability, replay, or backup, but not mandatory.
Finally, Databricks reads the streaming data (directly from the receiver or from storage), processes it in near real time, applies transformations and aggregations, and writes the results to storage, dashboards, or downstream systems.

This clear separation of responsibilities is what makes streaming pipelines scalable, reliable, and easier to manage.
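Here is a minimal Structured Streaming skeleton of that read, process, write pattern. It is only a sketch: it uses Spark’s built-in `rate` source as a stand-in for a real receiver so it runs in any Databricks notebook without external setup, and every name in it is illustrative.

```python
from pyspark.sql import functions as F

# 1. Read a continuous source (the built-in "rate" source emits test rows every second,
#    standing in for Event Hubs / Kafka / files landing in ADLS)
events = (
    spark.readStream
         .format("rate")
         .option("rowsPerSecond", 10)
         .load()
)

# 2. Process incrementally: count events per 1-minute window
counts = (
    events.groupBy(F.window("timestamp", "1 minute"))
          .count()
)

# 3. Write results continuously (an in-memory table here, so you can query it with SQL)
query = (
    counts.writeStream
          .format("memory")
          .queryName("event_counts")
          .outputMode("complete")
          .start()
)
```

The same three steps stay in place when you swap the `rate` source for your real receiver or storage location and the `memory` sink for Delta, a dashboard, or a downstream system.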
Now let’s focus on how to process streaming data in Databricks, using a concrete example.
Let's assume GlobalMart has a customer-facing application that continuously generates data — orders placed, products viewed, payments attempted, delivery status updates, etc.
This data is generated every few seconds and needs to be processed in near real time.
To handle this, we’ll follow a simple, practical flow:
GlobalMart already has an application that exposes an API endpoint which sends streaming events.
When configuring this API, we select Azure as the cloud provider.

The API setup asks for four fields:
1. Endpoint connection string
2. Event Hub name
3. Email
4. Access key

At this point:
- We already have the email and access key
- The Event Hub name and endpoint connection string will be generated next
So we pause here and move to Azure.
Open the Microsoft Azure Portal and search for Event Hubs.

- Subscription: Keep default
- Resource Group: Create a new one (or reuse an existing one).
A resource group is simply a logical container for related Azure resources (Event Hub, storage, Databricks, etc.).
Keeping them in one resource group makes management, monitoring, and cleanup easier.
- Namespace Name: Give a meaningful name (e.g., globalmart-streaming-ns)
- Region: Select East US
- Click Review + Create → Create
This namespace will act as a container for one or more Event Hubs.

Once the namespace is created, open it and create a new Event Hub.
- Event Hub Name: e.g., globalmart-orders
- Partition Count:
Partitions allow Event Hubs to scale.
- More partitions = higher parallelism and throughput
- For learning or low-volume streams, 1 is fine
- Production systems often use multiple partitions
- Retention Settings:
- Cleanup Policy: Delete
- Retention Time: Defines how long events are stored (e.g., 1–7 days)
Retention is important because it allows:
- Replay of data
- Temporary buffering if consumers are down
Create the Event Hub.
Now we finally have the Event Hub name.

Inside the Event Hubs namespace:
- Go to Settings → Shared Access Policies
- Open RootManageSharedAccessKey
- Copy the Primary Connection String
Now go back to the API configuration and fill in:
- Endpoint connection string → Paste the primary connection string you just copied


- Event Hub name → Paste the Event Hub name you created
At this point, the GlobalMart application knows where to send streaming data.
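For context, this is roughly what the producer side does under the hood. It is a hedged sketch using the azure-eventhub Python SDK; the payload, connection string, and Event Hub name are placeholders, and GlobalMart’s actual application may send events differently.

```python
# pip install azure-eventhub
import json
from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    conn_str="<endpoint-connection-string>",   # the namespace's primary connection string
    eventhub_name="globalmart-orders",         # the Event Hub created above
)

# Package one illustrative order event and send it
batch = producer.create_batch()
batch.add(EventData(json.dumps({"order_id": "O-1001", "product": "juice", "amount": 49.0})))
producer.send_batch(batch)
producer.close()
```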
Inside the Event Hub:
- Go to Process Data
- Enable Real-Time Insights from Events
- Click Start
This opens a Query page (powered by Azure Stream Analytics) where Azure provides a default streaming query:
SELECT *
INTO [OutputAlias]
FROM [event-hub-name] 
This query continuously reads the streaming data and writes it to an output destination.
Create an output:
- Choose Azure Data Lake Storage (ADLS)
- Create a container to store the streaming data

- Once the output is created, add it to the query (replacing [OutputAlias] with your output’s name, as shown in the image)
- Finally, test the query to confirm it returns results
Next, create a job and run it:


Once the job starts, streaming data from GlobalMart begins flowing into the ADLS container.
Note: Confirm that streaming files are landing in the container in your storage account before moving on to Databricks.
Now comes Databricks.
- Open Azure Databricks
- Mount the ADLS container to your Databricks workspace (or access it directly via its abfss:// path)
- Read the data using Structured Streaming
- Apply transformations, aggregations, and business logic
- Write results to storage, dashboards, or downstream systems (a minimal sketch of these steps follows below)
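Here is a hedged sketch of those steps, assuming the Stream Analytics output lands as line-delimited JSON. The storage account, container, column names, and checkpoint paths are placeholders rather than GlobalMart’s actual values, and it authenticates with a storage account key for brevity; a mount or a service principal would work just as well.

```python
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

# Authenticate to ADLS with the storage account key (placeholder values)
spark.conf.set(
    "fs.azure.account.key.<storage-account>.dfs.core.windows.net",
    "<access-key>",
)

input_path = "abfss://<container>@<storage-account>.dfs.core.windows.net/globalmart-orders/"

# File streaming sources need an explicit schema; this one is illustrative
order_schema = (
    StructType()
        .add("order_id", StringType())
        .add("product", StringType())
        .add("amount", DoubleType())
        .add("event_time", TimestampType())
)

# 1. Read: pick up new JSON files as Stream Analytics writes them
orders = spark.readStream.schema(order_schema).json(input_path)

# 2. Process: revenue per product over 5-minute windows
revenue = (
    orders.withWatermark("event_time", "10 minutes")
          .groupBy(F.window("event_time", "5 minutes"), "product")
          .agg(F.sum("amount").alias("revenue"))
)

# 3. Write: append results to a Delta table; the checkpoint lets the stream restart safely
(
    revenue.writeStream
           .format("delta")
           .outputMode("append")
           .option("checkpointLocation", "/tmp/checkpoints/globalmart_revenue")
           .start("/tmp/delta/globalmart_revenue")
)
```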
At this stage:
- Data is arriving continuously
- Databricks processes it incrementally
- Insights are generated in near real time
Streaming isn’t about complex technology — it’s about timing.
In reality, it’s a response to a simple truth:
Data loses value the longer you wait to process it.
When data arrives continuously, waiting to process it means losing its value. Batch processing still works when delays are acceptable, but modern use cases demand insights as events happen, not hours later.
With tools like Event Hubs, ADLS, and Azure Databricks, streaming becomes a practical extension of what you already know — not a replacement, but a complement.
Use batch when waiting is fine.
Use streaming when waiting is costly.
That simple shift is what makes systems truly real-time.
