Strategically AI

How the Last AI Boom Stalled
Photo Credit: Joseph A. Giampapa
As recently as five years ago, this site was a mock-up of a suburban neighborhood, built to test and prove the commercial viability of self-driving vehicles. Complete with asphalt-paved roads, functioning traffic lights, road signage, and mock pedestrians, vehicles, and obstacles, this “robocity” was pitched as a technological holy land that would show the world how a local economy could be revitalized and national transportation revolutionized. History can teach us many lessons if we are willing to learn from it. Every week I walk past what remains of the crown jewel of Pittsburgh’s – and the U.S.’s – technological renaissance, one spearheaded by AI and built on the region’s strengths: robotics, AI, non-trivial R&D, and technological entrepreneurship. We missed an opportunity and could have done better. What follows are some of my thoughts on how we can do better the next time around.
Lesson 1:
Be clear about what private equity will develop, and where it still needs help and support from publicly-funded R&D. The brashest and most vocal of the startups claimed that if they could not develop the technology, it would never be developed (1). While such statements helped the company attract investment interest and anchor investor belief in its unique capabilities and mission, they were far from the truth and did not help the industry. By definition, public funding usually does not go where private equity is already present. Private industry still needed help from publicly-funded R&D but signaled instead that everything was under its control. When techno-entrepreneurs over-promise and under-deliver, they inadvertently kill their own industry.
​
Self-driving vehicles can only drive as fast as their autonomy pipeline – the chain of sensing, perception, planning, and actuation – can process data in order to make safe driving decisions. The faster the vehicle travels, the farther ahead the sensors need to look, and the more frequently the autonomy pipeline needs to cycle. Even in the best conditions, the LiDAR range-finding sensors could not detect far enough ahead of the vehicle to permit safe driving at speeds above roughly 40 mph, and LiDAR performance degrades significantly in precipitation, fog, and dusty conditions. Beam-steerable radar, for example, is a possible sensing complement, but its technical maturity was still too low for practical use, and the self-driving industry at the time was not encouraging any R&D in such technology.
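The coupling between sensor range, pipeline latency, and safe speed can be made concrete with a back-of-the-envelope sketch. All numbers here – usable detection range, end-to-end latency, braking deceleration – are illustrative assumptions for the sake of the exercise, not measurements from any actual vehicle:

```python
import math

def max_safe_speed_mps(sensor_range_m: float,
                       pipeline_latency_s: float,
                       decel_mps2: float) -> float:
    """Largest speed v such that reaction distance plus braking
    distance fit within the sensor's detection range:
        v * t + v**2 / (2 * a) <= R
    Solving the quadratic v**2/(2a) + t*v - R = 0 for the positive root."""
    t, R, a = pipeline_latency_s, sensor_range_m, decel_mps2
    return a * (math.sqrt(t * t + 2.0 * R / a) - t)

# Illustrative assumptions: ~60 m of usable LiDAR range for small
# obstacles, 0.5 s of end-to-end pipeline latency, and comfortable
# braking at 4 m/s^2.
v = max_safe_speed_mps(60.0, 0.5, 4.0)
print(f"max safe speed ~ {v:.1f} m/s (~{v * 2.237:.0f} mph)")
# -> max safe speed ~ 20.0 m/s (~45 mph)
```

Under these assumed numbers the ceiling lands in the same neighborhood as the roughly 40 mph limit described above; longer range, lower latency, or harder braking each raise it.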
​
The other major blind spot of the hyperbolic self-driving startups was that they discounted the role of human knowledge in skilled driving. As an intuitive illustration, consider how differently you drive during the day on a road that you know versus how you drive at night, in the rain, on an unknown road with curves. You can only drive as fast as you can safely operate the vehicle, and if you face many uncertainties – the consistency of the road curvature, the likelihood of another vehicle entering your path from a side street, the possibility of ruts and pools of water on the road – then you will necessarily drive more slowly in the latter situation. What enables you to drive fast on familiar roads during the day is your knowledge of them and the relative absence of uncertainty. Self-driving vehicles have neither that knowledge nor the means to use it.
This is not to claim that self-driving vehicles could never acquire, update, and use models of such knowledge. It is to emphasize that there are often technical limits to what an AI system can learn, and that the most efficient and cost-effective way to resolve those limits is frequently to integrate information from other sources. AI systems almost always need to be designed with this principle in mind.
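One way to picture “integrating information from other sources” is a planner that blends the speed its live perception alone can justify with the higher speed that prior knowledge of the road would support, weighted by confidence in that prior. The function and its numbers are a hypothetical sketch, not any real planner’s logic:

```python
def commanded_speed_mps(perception_speed_mps: float,
                        map_speed_mps: float,
                        map_confidence: float) -> float:
    """Blend the speed justified by live perception alone with the
    (typically higher) speed justified by prior map knowledge,
    weighted by confidence in that prior. With zero confidence the
    vehicle falls back to what its sensors alone can justify."""
    w = max(0.0, min(1.0, map_confidence))  # clamp confidence to [0, 1]
    return (1.0 - w) * perception_speed_mps + w * map_speed_mps

# Unknown wet road at night: perception alone justifies only ~12 m/s.
print(commanded_speed_mps(12.0, 22.0, 0.0))  # 12.0
# Partially trusted map of a familiar commute: the planner moves
# partway toward the 22 m/s the mapped curvature would support.
print(commanded_speed_mps(12.0, 22.0, 0.5))  # 17.0
```

The point of the sketch is the structure, not the arithmetic: the external knowledge source raises performance only to the degree that the system can justify trusting it.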
​
The Department of Transportation (DOT) has been considering, for a couple of decades now, how roads equipped with sensors, or even sensor-equipped vehicles, could communicate with each other to improve vehicular safety on U.S. roads. There is even a band of communications spectrum reserved by the Federal Communications Commission (FCC) for such purposes. The hyperbolic startups missed an opportunity to cooperate collectively with the DOT to research and develop simple but effective protocols for augmenting on-board perception with additional knowledge of the environment, which would have significantly improved safety and perhaps even speeds and other performance criteria. It is, of course, a big ask to coordinate so many stakeholders, but the business interests were aligned, and there was, and still remains, a need for such cooperation. The hyperbolic self-driving startups did not pursue that approach, preferring a brute-force strategy of trying to solve all their problems by themselves. Some problems simply cannot be solved by collecting more data and scaling compute power alone.
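As a sketch of what such a protocol could look like, here is a minimal roadside-to-vehicle beacon, loosely inspired by V2X basic safety messages but greatly simplified; the message fields, names, and JSON encoding are this example’s own assumptions, not any standardized format:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SafetyBeacon:
    """Minimal infrastructure-to-vehicle beacon: a roadside unit
    broadcasts the position and motion of a road user it observes,
    so vehicles can 'perceive' beyond their own sensor range."""
    sender_id: str     # hypothetical roadside-unit identifier
    lat_deg: float
    lon_deg: float
    speed_mps: float
    heading_deg: float
    timestamp_s: float

    def encode(self) -> bytes:
        return json.dumps(asdict(self)).encode("utf-8")

    @staticmethod
    def decode(payload: bytes) -> "SafetyBeacon":
        return SafetyBeacon(**json.loads(payload.decode("utf-8")))

# A roadside sensor reports a pedestrian hidden around a corner.
msg = SafetyBeacon("rsu-017", 40.4406, -79.9959, 1.2, 90.0, 1700000000.0)
assert SafetyBeacon.decode(msg.encode()) == msg  # lossless round trip
```

Even a protocol this simple illustrates the principle: a few bytes of shared situational knowledge can substitute for sensing capability that no single vehicle can carry on board.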
Lesson 2:
Know the business model and motivation for your technology. This is a first lesson in any course on entrepreneurship, but neither the tech media nor private equity was asking the uncomfortable questions. Was there even a compelling business model for self-driving vehicles? Their biggest selling point was safety, but the most difficult safety case to prove was how well self-driving vehicles could model the behavior of human drivers and still operate successfully in their midst. Blithely stating that the vehicles would simply follow the rules of the road made everyone laugh knowingly about how well that would work during congested rush-hour merges, or with aggressive drivers refusing to yield at an intersection. The hyperbolic startups argued with statistics, but the percentage of self-driving vehicles on the roads was still too small to produce convincing numbers. Another hypothetical argument was that self-driving vehicles could reduce the need for urban parking lots by taking their passengers to work, driving back home to self-park, and then returning to work to carry their passengers home. The hypothetical counter was that we would then have more and longer periods of traffic congestion, not to mention greater fuel consumption and more pollution.
​
Ultimately, the commercial viability of self-driving technology reduces to the bottom line: what it would cost producers and how much consumers would be willing to pay for it. Considering the efficiency of automobile manufacturing and the inefficiency of nascent self-driving technologies, it was unclear whether there could ever be a margin high enough to justify the additional costs of full autonomy. Consequently, a key question was ignored: are the contexts in which full autonomy can safely operate compelling enough to justify its costs, even at its best performance?
Lesson 3:
Know the regulatory and all other relevant contexts before developing the technology. Doing so can save the time, money, and effort of pursuing the wrong goals. Better, it allows a company to focus efficiently on the business needs that it knows it can address and will be permitted to address. The history of self-driving technologies is dotted with fantastic ideas and realizations that simply were not permitted to operate – for whatever reason. This is not to say that regulation is bad; rather, it is a reminder of due diligence: do your homework.
The self-driving vehicle industry is not unique in needing to withstand regulatory tests in order to succeed. The information retrieval industry risked having its growth stunted by copyright law until the legal argument was successfully made that information retrieval indices were interpretations of copyrighted information, not the protected information itself. The life sciences are another industry in which the success of a nascent technology depends critically on how it is presented to regulators and on how the business interests it addresses align with those of the major stakeholders.
AI is human knowledge and reasoning captured in usable digital form. It can be customized and tailored to address the critical regulatory and business contexts that will give a nascent technology its competitive advantage. In this respect, AI can be a tool for business enablement as well as a strategic means of opening new markets. Rather than treating regulatory and business contexts as negatives to avoid, understanding and embracing them completely – leveraging AI where possible – is a strategic approach to building a business and an industry that will not suffer boom-and-bust cycles.
Lesson 4:
Know the limits of the technology, and assess how well it satisfies the business needs it should address. If the technical limits fall short of those needs, are there other, complementary ways in which they can be met? As mentioned above, the primary business need of self-driving vehicles is safety: safety of the vehicle, safety of its payload, and safety of the environment in which it operates. Yet at least one self-driving startup wanted to experiment with the business case for speed and all the consequences it could entail. For the reasons mentioned above, the technology could not support both at the same high threshold: either safety could be achieved at the expense of speed, or speed at the expense of safety. The company did not investigate complementary technologies; it took its risks, there was a human fatality, and now the company is no more. The consequences rippled across the industry as well, swinging the pendulum of public trust toward mistrust and bringing more regulatory oversight to the industry at large – which leads to the next lesson.
Lesson 5:
Individual abuses of regulatory graces negatively impact the whole industry. Regulation is not necessarily bad: good regulation can benefit an industry by establishing safety boundaries, managing expectations, reducing risks, and creating economic opportunities. Some parts of the United States operate under the default policy that all actions are permitted unless they are explicitly prohibited or otherwise regulated. In those spaces of regulatory permissiveness, the self-driving startups were allowed to experiment with and demonstrate their technologies, provided that they did so within the bounds of public trust. Violate that trust, and the whole industry pays the consequences. In this respect, individual techno-entrepreneurs should conduct themselves with awareness of how their decisions and actions affect their community, their industry, and its research and development.
​
The above are a few of the lessons I reflect upon every time I walk past this fallow field. It makes me think of current trends, of the AI-based entrepreneurs I know who are trying to make their first significant sale, and of the myriad stakeholders with both aligned and orthogonal interests in AI technologies. There are lessons, analogues, and contrasts in every technological entrepreneurship story found in current events and trends. I am sure others have entrepreneurship stories of their own; please feel welcome to add them in the comments.
​
(1) Matt McFarland, “Uber is selling its self-driving car business to Aurora,” CNN Business, 7 December 2020, https://edition.cnn.com/2020/12/07/cars/uber-sells-self-driving (accessed 17 August 2025).
#AI #AutonomousVehicles #SelfDriving #LessonsLearned #JosephAndrewGiampapa #LindenAI