Spring 2026 Winner of the Path to Law Scholarship

Jacquelyn Scarbary

Tomasik Kotin Kasserman, LLC proudly announces Jacquelyn Scarbary, a law student at Purdue Global Law School, as the winner of its Path to Law Scholarship. In her essay, Jacquelyn argues manufacturers should bear primary liability when autonomous systems control driving decisions and cause harm.

Read Her Essay Here:

The rapid development of self-driving cars has raised difficult questions about legal responsibility when accidents occur. Traditional tort law assumes that a human driver is primarily responsible for operating a vehicle safely. However, as vehicles increasingly rely on complex software, sensors, and automated decision-making, it becomes less clear who should be held liable when those vehicles cause injuries. In this evolving landscape, liability must be shared, but manufacturers should bear substantial responsibility because they design, program, and market the automated systems that control the vehicle's behavior.

Historically, automobile accidents have been governed by negligence principles that focus on whether a reasonable driver exercised appropriate care under the circumstances. When the “driver” is an automated system, the idea of a reasonable person no longer fits neatly. Instead, the central questions become: Did the manufacturer design the system safely? Were the warnings and instructions adequate? Were software updates timely and effective? These questions align more closely with product liability law than traditional negligence by an individual driver. Under product liability doctrines, manufacturers can be held responsible when a defect in design, manufacturing, or warnings causes injury (American Bar Association, 2020).

Self-driving cars are essentially rolling computers, and their performance depends heavily on programming, algorithms, and data. The average consumer has no realistic way to evaluate, modify, or fully understand these systems. This asymmetry supports shifting more responsibility to manufacturers, who control the technology and are in the best position to prevent harm. When an automated driving system fails to recognize a pedestrian, misinterprets road markings, or does not react appropriately to hazards, it is usually not because the “user” misused the product. Instead, it is because the system was not designed or tested adequately for real-world conditions. In such cases, assigning liability to the manufacturer encourages safer design, more robust testing, and continuous improvement.

That said, there are situations where other parties may also share liability. Human occupants may still be responsible if they misuse the technology—for example, by disabling safety features, ignoring clear warnings, or using the system outside its intended operating conditions. In addition, third-party software providers or maintenance companies could be partly liable if their actions introduce defects or vulnerabilities. A reasonable approach is a hybrid model that evaluates the specific cause of the accident and allocates responsibility among manufacturers, human users, and possibly other entities. Still, the default assumption should be that when the vehicle is operating in autonomous mode as intended, the manufacturer’s system is “driving,” and liability should start there.

Legal scholars and policymakers have already begun to wrestle with these issues. The RAND Corporation has noted that as automation increases, it will become more appropriate to shift liability toward manufacturers and developers of automated driving systems, particularly in higher levels of automation where the human’s role is minimal (Anderson et al., 2016). Some jurisdictions are exploring frameworks that explicitly recognize the manufacturer as the “operator” when the automated system is engaged. This reflects the reality that coding decisions, sensor integration, and machine-learning models are what truly control the car at those moments.

Requiring manufacturers to take responsibility also aligns with broader public policy goals. If manufacturers know they will be held liable for injuries caused by defects in autonomous systems, they have strong incentives to invest in safety, transparency, and rigorous testing. Clear liability rules can also increase public trust in self-driving technology by reassuring people that they will not be left without recourse if something goes wrong. While insurance markets will adapt to distribute costs, the legal baseline should make clear that companies profiting from these systems must also absorb the risks of their failures.

In conclusion, as self-driving cars become more prevalent, the traditional model of blaming a human driver for every accident no longer makes sense. Liability should follow control. When automated systems make the critical driving decisions, manufacturers should be required to take primary responsibility for injuries caused by defects in design, software, or warnings. A flexible, hybrid liability framework can still recognize the role of human misuse or third-party errors, but the core duty should rest with the entities that build, program, and sell the technology. This approach not only reflects technological reality but also promotes safety, accountability, and public confidence in the future of transportation.