What inaccurate ETAs on navigation apps tell us about urban mobility — a personal experiment in Delhi
A week-long commute experiment in Delhi reveals a roughly 21 per cent error in peak-hour ETA predictions.
The failure isn't really algorithmic; predictive models break down when forced to navigate the unpredictability of city streets.
By consistently underestimating traffic delays, apps artificially incentivise private car use, which is why we must design better transport infrastructure.
Every day, millions of people choose their mode of commute based on a number glowing on their phone screens: the estimated time of arrival (ETA) on a navigation app.
As an urban planner studying transportation networks in Indian cities, I have learnt to view ETAs with skepticism. Predicting traffic is not easy, especially in a dense, fast-moving metropolis.
ETA is an illusion
Anyone who has navigated peak city traffic knows the reality: The initial ETA is often an illusion. Fifty minutes into a one-hour journey, the app often insists you're still 30 minutes away.
This happens when sudden, cascading congestion outpaces a navigation app's predictive model, producing a prediction error that grows as the trip unfolds.
To understand exactly how and when this algorithm breaks down during rush hour, I ran a simple exercise which anyone can repeat.
The hypothesis was simple: The initial ETA provided by routing apps is systematically flawed during peak hours because it cannot accurately account for the traffic wall that builds up while the trip is already underway. It offers a static prediction for a highly dynamic, degrading network.
To test the extent of the delay, I mapped my standard 26-kilometre work commute in Delhi starting exactly at 10:00 am for a week. Instead of just looking at the final delay, I tracked the app's projected total trip time at strict 5-minute intervals from start to finish. The goal was to pinpoint the exact spatial and temporal moments the predictive algorithm failed and was forced to correct itself.
Here's what I found: While the average baseline prediction for the route was 70 minutes, the trip actually required 85 minutes on average. That is a 21.4 per cent error in the initial prediction.
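The headline figure is simple arithmetic over the logged trips. A minimal sketch of that calculation, where the per-trip values are illustrative and chosen only to match the weekly averages reported above:

```python
# Hypothetical log of a week's trips: (initial ETA in minutes,
# actual duration in minutes). These values are illustrative,
# constructed around the observed averages of 70 and 85 minutes.
trips = [(70, 83), (71, 86), (69, 84), (70, 87), (70, 85)]

avg_predicted = sum(p for p, _ in trips) / len(trips)   # 70.0
avg_actual = sum(a for _, a in trips) / len(trips)      # 85.0

# Prediction error as a share of the initial estimate:
# (85 - 70) / 70 ≈ 21.4 per cent.
error_pct = (avg_actual - avg_predicted) / avg_predicted * 100
print(f"{error_pct:.1f}% average underestimate")
```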
But the most revealing insight wasn't the total delay; it was how that delay accumulated.
In this graph, T=00 marks the beginning of the trip. The data was recorded using voice commands to avoid information bias. This chart represents a limited, observational snapshot of publicly available travel-time estimates, compiled solely for non-commercial public interest research regarding urban mobility and congestion behavior.
For the first hour of the trip, the algorithm is only off by a few minutes, slowly creeping up to a 77-minute projected total. However, between the 60-minute and 65-minute marks, the model completely collapsed. Despite having driven forward for five minutes, the remaining ETA actually increased. In that brief window, the total delay doubled from seven minutes to 14 minutes.
At its worst (T=70), the app was projecting a total trip time of 88 minutes. That’s a massive 18-minute deviation from the baseline.
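That collapse is easy to spot programmatically: at each 5-minute sample, compute the remaining ETA (projected total minus elapsed time) and flag any interval where it rises even though the vehicle has been moving forward. A sketch, using illustrative samples shaped like the trip described above:

```python
# (elapsed_minutes, projected_total_minutes) samples at 5-minute
# intervals. Illustrative values matching the trip described above:
# roughly on track until T=60, then a sudden jump by T=65.
samples = [(0, 70), (30, 72), (60, 77), (65, 84), (70, 88)]

def collapse_windows(samples):
    """Return intervals where the *remaining* ETA increased
    despite time (and distance) having elapsed."""
    flagged = []
    for (t0, tot0), (t1, tot1) in zip(samples, samples[1:]):
        remaining0 = tot0 - t0
        remaining1 = tot1 - t1
        if remaining1 > remaining0:   # the ETA went backwards
            flagged.append((t0, t1, remaining1 - remaining0))
    return flagged

print(collapse_windows(samples))  # only the T=60..65 window is flagged
```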
Why navigation apps aren't to blame
Models such as Google Maps rely on a combination of historical averages (what traffic usually looks like on a typical Friday morning) and real-time probe data (the current speed of smartphones moving along the route). Using machine learning architectures like Graph Neural Networks, the app stitches these two datasets together to guess the future state of the road network.
In its developer documentation, Google defines this default traffic model as "BEST_GUESS", noting that "live traffic becomes more important the closer the departure time is to now".
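For developers, that model is an explicit request parameter. A minimal sketch of how such a traffic-aware request is assembled against the classic Directions web service (the origin, destination, and API key below are placeholders; no request is actually sent):

```python
from urllib.parse import urlencode

def traffic_aware_directions_url(origin, destination, api_key):
    """Build a Google Directions API request that asks for the
    default 'best_guess' traffic model. Illustrative only."""
    params = {
        "origin": origin,
        "destination": destination,
        "departure_time": "now",        # enables traffic-aware ETAs
        "traffic_model": "best_guess",  # Google's default model
        "key": api_key,
    }
    return ("https://maps.googleapis.com/maps/api/directions/json?"
            + urlencode(params))

url = traffic_aware_directions_url(
    "Dwarka, Delhi", "Connaught Place, Delhi", "YOUR_API_KEY")
# The JSON response carries both 'duration' (the historical baseline)
# and 'duration_in_traffic' (the live best-guess estimate) per leg.
```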
The flaw in this approach is not really a flaw at all, because the model is not at fault; the city is. The model is merely the messenger of bad news.
ETA algorithms are forced to assume a somewhat predictable, linear degradation of traffic. They calculate vehicular movement like fluid dynamics, assuming a standardised, homogeneous flow of cars moving through a pipe.
Predictive models fail because they cannot account for micro-interruptions they cannot foresee. The algorithm expects perfect lane discipline and uniform speeds, but out on the asphalt, that mathematics shatters the moment it encounters the reality of urban streets:
• the ever-increasing dependence on personal vehicles,
• a vehicle stalling out of the blue, or the obligatory roadside arguments after accidents,
• unregulated, poor parking practices that turn a three-lane arterial into a single-lane bottleneck,
• our outdated signal designs, where one arm of a heavy-traffic corridor crawls while a low-traffic arm is given equal, sometimes higher, green time,
• the confusing right turns, or rather a general disregard for turning lanes,
• and the constant conflicts between slow-moving traffic and high-speed vehicles, to name a few.
Urban planning solutions
Why does this matter outside of a frustrating morning inside a vehicle? Because these predictive failures have real implications for urban mobility and climate action planning.
When navigation apps consistently underestimate peak travel times by 20 per cent or more, they artificially incentivise private vehicle use. If an app tells a commuter a drive will take 70 minutes, but the local metro or bus takes 80 minutes due to poorer maneuverability, the car wins the mode-choice battle.
Further, if the reality of the drive is actually 85-90 minutes of stop-and-go gridlock, the delay is even longer for public modes of transport, especially those that do not operate on fixed, protected rights of way, like buses and trams. These factors combined actively disincentivise shared modes of transport.
Moreover, as we transition toward electric mobility, these unpredicted, compounding delays have tangible impacts on EV battery efficiency and range anxiety, particularly for commercial and gig-economy fleets operating during peak hours.
The Google Maps API documentation warns developers that calculating these highly accurate, traffic-aware routes is computationally expensive and increases response latency. But no amount of server processing power can compute a way out of a physical bottleneck.
To truly cure the prediction problem and reduce this compounding vehicular friction, we have to design streets that manage demand and prioritise efficient movement. The solutions lie in targeted urban planning interventions:
• Ultimately, the only way to permanently reduce the volume of cars causing the shockwaves is to offer a better alternative. We need to boost the connectivity and frequency of high-capacity public transport with formalised last-mile options and push for Transit-Oriented Development, bringing housing and jobs closer together.
• We must move away from outdated, static timer-based signals. Installing smart, Adaptive Traffic Control Systems that adjust green-light times based on real-time directional volume can prevent the scenario where a heavy corridor crawls while an empty arm gets a green light.
• Implementing aggressive parking pricing and strict enforcement reclaims the right-of-way for moving people, rather than storing empty metal boxes.
• Designing dedicated infrastructure, such as protected active-mobility lanes, slow-moving traffic lanes and exclusive bus corridors, physically separates conflicting modes, allowing all to move at their optimal speeds safely.
