Umnai on AI Trust

In the whirlwind of AI excitement in 2023, there's a reality check for organizations diving into AI implementation: trust issues. The latest AI models, including OpenAI's GPT-4, are known to generate inaccurate outputs. This poses challenges for critical tasks, where reliability is crucial.

26 January 2024
Ken Cassar, CEO of London-based startup Umnai, acknowledges AI’s prowess at tasks like recommending products based on user behavior. But he points to a trust barrier in higher-stakes decision-making, such as landing airplanes. The problem lies in the “black box” nature of AI: it is hard to understand how a model reaches its decisions.
In Artificial Intelligence (AI), the “black box” has become a focal point of discussion and concern. The metaphor refers to the opacity surrounding how AI systems arrive at their decisions. As AI becomes integrated into more aspects of our lives, this lack of transparency raises important questions about accountability, trust, and ethics.


To address this, Umnai’s Dr. Angelo Dalli has created a “neuro-symbolic” AI architecture, combining neural networks with rule-based logic. This approach lets users dissect the decision-making process, enhancing transparency.
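The general neuro-symbolic pattern can be sketched as a learned scorer feeding an explicit rule layer, so every decision carries a human-readable trace. This is a minimal illustration of the idea, not Umnai’s actual architecture; the scorer, rules, and feature names are all hypothetical.

```python
# Neuro-symbolic sketch: a learned scorer feeds a rule layer, and every
# decision records which rule fired, making the outcome inspectable.

def neural_score(features):
    # Stand-in for a trained neural network: returns a risk score in [0, 1].
    weights = {"speed": 0.6, "altitude_error": 0.4}
    raw = sum(weights[k] * features[k] for k in weights)
    return min(max(raw, 0.0), 1.0)

# Symbolic layer: explicit, ordered rules a human can read and audit.
RULES = [
    ("abort if risk is high", lambda f, s: s > 0.8, "abort"),
    ("review on moderate risk", lambda f, s: s > 0.5, "review"),
    ("proceed when risk is low", lambda f, s: True, "proceed"),
]

def decide(features):
    score = neural_score(features)
    checked = []
    for name, condition, action in RULES:
        checked.append(name)
        if condition(features, score):
            # The trace shows exactly which rules were evaluated and which fired.
            return {"action": action, "score": score, "trace": checked}

print(decide({"speed": 0.9, "altitude_error": 0.7}))
```

Unlike a pure neural network, the rule layer here is the part a user can dissect: changing a threshold or a rule changes behavior in a way that is visible and explainable.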
Another player, Aligned AI, focuses on improving existing AI models’ reliability. Many AI systems fail when exposed to real-world data. Aligned AI aims to teach systems to generalize and extrapolate from their training, enabling continuous improvement with live human feedback.
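Improving a model with live human feedback can be pictured as a simple online-learning loop: the system predicts on real-world data, a human corrects it, and the model updates immediately. The sketch below uses a perceptron-style update as a stand-in; it is not Aligned AI’s method, just an illustration of the feedback loop.

```python
# Sketch of continuous improvement from live human feedback: an online
# classifier that updates its weights whenever a human corrects it.

class OnlineClassifier:
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.lr = lr

    def predict(self, x):
        return 1 if sum(wi * xi for wi, xi in zip(self.w, x)) > 0 else 0

    def feedback(self, x, human_label):
        # Perceptron-style update: only adjust when the human disagrees.
        if self.predict(x) != human_label:
            sign = 1 if human_label == 1 else -1
            self.w = [wi + self.lr * sign * xi for wi, xi in zip(self.w, x)]

model = OnlineClassifier(n_features=2)
# Each item is (live input, the label a human reviewer assigns).
stream = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 0.2], 1)]
for x, label in stream:
    model.predict(x)          # the system's answer on live data
    model.feedback(x, label)  # the human's correction feeds back in
```

The key property is that the model keeps learning after deployment, rather than being frozen at training time.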
While some startups target business opportunities by solving usability issues in AI, others, like Conjecture, address what they see as existential problems. Founder Connor Leahy emphasizes the need for controlled AI power. Conjecture proposes an alternative approach called “boundedness,” breaking down AI systems into manageable processes that users can combine for specific tasks. This design ensures that no part of the system surpasses human understanding.
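The “boundedness” idea of decomposing a system into small, combinable processes can be sketched as a pipeline of named steps, each one simple enough to inspect on its own. This is a toy illustration of the design principle, not Conjecture’s implementation; the step names and task are invented.

```python
# Sketch of "boundedness": instead of one opaque model, the task is split
# into small, named steps that a user can compose, inspect, and swap.

def extract_numbers(text):
    return [int(tok) for tok in text.split() if tok.isdigit()]

def total(numbers):
    return sum(numbers)

def format_answer(value):
    return f"total = {value}"

PIPELINE = [extract_numbers, total, format_answer]

def run(pipeline, data, log):
    # Every intermediate result is logged, so no part of the process
    # exceeds what a human can follow.
    for step in pipeline:
        data = step(data)
        log.append((step.__name__, data))
    return data

log = []
print(run(PIPELINE, "invoices: 12 7 30", log))
```

Because each step is a plain function, a user can rearrange or replace steps for a specific task, and the log makes every intermediate state visible.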
As companies race to adopt AI, these startups are not only addressing usability challenges but also grappling with the ethical and existential implications of increasingly powerful AI systems.
