
AI Verification Startup ArbaLabs Tackles Trust and Accountability for Autonomous Systems and Edge Devices

02 Mar, 2026

As artificial intelligence becomes embedded in daily life, concerns are growing about how AI systems can be verified and who is accountable when failures occur.

Much of the global AI race has centered on improving system capability. Less focus has been placed on what happens after deployment, particularly as AI expands from cloud servers into factories, vehicles and infrastructure.

In physical environments, tracing what an AI system did — and proving it — can become critical.

ArbaLabs Develops Tools to Verify AI on Edge Devices

One startup addressing this issue is ArbaLabs, which participated in the K-Startup Grand Challenge in 2025 and finished in the final four.

The company develops tools to verify how AI systems operate on edge devices, which run AI locally instead of in centralized data centers.

Founder Ashley Reeves explained the company’s mission. “ArbaLabs builds a way to prove that an AI system is running exactly as it was designed and that its results haven’t been tampered with,” he said. “We focus on trust and accountability for AI in sensitive, real-world environments.”

He compared the technology to a flight recorder for AI systems. It creates verifiable records showing which AI model generated a result and whether that output was altered afterward.

“A normal AI system can generate a result,” he said. “Our system can prove which exact model produced that result and that it was not modified.”
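Claims like this are typically backed by cryptographic fingerprints and signatures: hash the exact model weights, bind that hash to each output, and sign the combined record so any later change is detectable. Below is a minimal illustrative sketch of that general idea in Python. ArbaLabs has not published its design, so every name here, the per-device HMAC key, and the record format are assumptions, not the company's actual system.

```python
import hashlib
import hmac
import json

# Assumption: each edge device is provisioned with a secret signing key.
SECRET_KEY = b"device-provisioned-signing-key"

def model_fingerprint(weights: bytes) -> str:
    """SHA-256 digest identifying the exact model weights."""
    return hashlib.sha256(weights).hexdigest()

def attest_output(weights: bytes, output: str) -> dict:
    """Create a tamper-evident record binding one inference result
    to the fingerprint of the model that produced it."""
    record = {"model": model_fingerprint(weights), "output": output}
    payload = json.dumps(record, sort_keys=True).encode()
    record["mac"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(record: dict) -> bool:
    """Check that neither the output nor the model reference was altered."""
    payload = json.dumps(
        {"model": record["model"], "output": record["output"]}, sort_keys=True
    ).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["mac"], expected)

weights = b"\x00\x01\x02"  # stand-in for real model weights
rec = attest_output(weights, "crack detected in panel 7")
assert verify(rec)                 # untampered record passes

rec["output"] = "no damage found"  # alter the result after the fact...
assert not verify(rec)             # ...and verification fails
```

A production system would use asymmetric signatures and hardware-backed keys rather than a shared HMAC secret, so verifiers need not hold the signing key; the sketch only shows the tamper-evidence principle.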

Focus on Accountability, Not Judging AI Decisions

The verification tools do not determine whether an AI decision is correct or fair. Instead, they establish that the system operated as expected and that outputs were not tampered with.

Reeves said this distinction matters in industries where safety and liability are concerns.

He cited drones inspecting infrastructure or farmland. “The AI on that device decides whether something is damaged, safe or dangerous,” he said. “If that AI model is altered maliciously or accidentally, the decision could be wrong, with serious consequences.”

Sectors including drone manufacturing, autonomous vehicles, robotics and smart factories have shown early interest. In these settings, AI systems often function with limited direct oversight, raising questions when incidents happen.

In the United States, high-profile autonomous vehicle accidents, including a fatal crash during a self-driving test in Arizona, have raised questions about which software versions were deployed and what condition the systems were in at the time of the incidents.

“When an AI-driven system makes a fatal or near-fatal decision, investigations rely on logs and internal records,” Reeves said. “Without independent verification, it can be difficult to prove whether the deployed model was unchanged or properly calibrated.”

Policymakers Signal Growing Interest in AI Transparency

Policymakers in several jurisdictions, including Korea and the European Union, have indicated interest in stronger transparency and security requirements for AI deployments, especially in regulated sectors.

Standards are still evolving, but some companies are preparing in advance.

“We now have AI systems making decisions in health care or industrial automation,” Reeves said. “The question is no longer ‘Can AI do this?’ It’s ‘Can we trust it, verify it and assign responsibility if something goes wrong?’”

As AI systems move further into the physical world, attention is shifting from capability to accountability.

“Innovation is moving extremely fast and that’s exciting,” Reeves said. “But accountability mechanisms are still catching up. Trust should be measurable, not marketing.”



PHOTO: KOREATECHDESK

This article was created with AI assistance.

We make every effort to ensure the accuracy of our content, but some information may be incorrect or outdated. Please let us know of any corrections at [email protected].
