Steering AI Into The Next Phase


Judah Taub is Managing Partner at Hetz Ventures, a top seed-stage VC based in Tel Aviv. He lectures on time management and creative thinking.

In the early days of Formula One, it was all about the engine. Many assumed that the better the engine, the better the car, and the better the driver a team could attract; in short, the more podiums and championships you would eventually win.

But this changed. As engines continued to improve, other components of the car became increasingly important. Yes, adding more horsepower helps, but sometimes getting the aerodynamics right matters even more.

Adrian Newey, a highly esteemed racing engineer, aerodynamicist and CTO of Red Bull Racing (and the guy who literally wrote the book, How to Build a Car), recognized the pivotal role of aerodynamics in a race car's performance at a time when most other teams were focused almost entirely on the engine. His ingenuity and meticulous attention to detail in developing aerodynamic components, sometimes even at the expense of the engine, have not only revolutionized Red Bull's cars but also redefined the competitive landscape of Formula One racing.

But racing is not the only field where hitting a performance threshold shifts the bottleneck to a different component. Artificial intelligence is very much in that category too.

Accuracy

For years, the holy grail of AI software was accuracy: getting systems to understand humans and respond intelligently rather than simply follow an "if/then" rule system. With large language models (LLMs) now in play, and models such as GPT easily accessible, we have reached a critical performance threshold.

But as with Formula One cars, there comes a point where this one dimension of performance (the engine for cars, human-like intelligence for software) is no longer the only parameter that matters. With LLMs, we have progressed to a point where the goal isn't simply to get from 99.5% to 99.8% accuracy and then to 99.99%. The bottleneck has shifted to another variable, one we will gradually learn to appreciate: the aerodynamics of the LLM, if you wish.

Explainability

In LLM terms, that new variable is explainability. Given the direction regulation is heading, accuracy alone will not be enough if you cannot explain how the model reached its outcome. This goes back to the car: a potentially weaker engine paired with better aerodynamics can beat the strongest engine on its own.

In the case of LLMs and generative AI, these processes aren't rule-based the way traditional software programs are. With a rule-based process, you can go back and trace the logic. But here, if you make the same query three times, the answer may not come out the same each time. That is inherent to a generative, probabilistic system, and it is nearly impossible to explain exactly why the system arrived at a specific output.
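To make that concrete, here is a minimal sketch in Python, using invented token probabilities, of why identical queries can diverge: a generative model samples each output from a probability distribution rather than following a fixed branch of logic.

```python
import random

# Toy next-token probabilities for the prompt "The claim should be";
# the words and the numbers are invented for illustration only.
NEXT_TOKEN_PROBS = {
    "approved": 0.45,
    "denied": 0.35,
    "reviewed": 0.20,
}

def sample_next_token(probs):
    """Draw one token from a probability distribution, the way a
    generative model samples its output rather than applying a rule."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# The same "query" three times can yield three different answers.
for _ in range(3):
    print("The claim should be", sample_next_token(NEXT_TOKEN_PROBS))
```

Run it a few times and the outputs shift, even though nothing about the "query" changed. Scale that up to billions of parameters and the difficulty of explaining any single answer becomes clear.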

Understanding Why

Not knowing exactly why GPT suggests a particular draft for a news article, a birthday card or a snippet of code is fine. But it gets more complex when the output is a decision, the process is hidden, and seeing the output alone doesn't reveal why this was the system's recommendation.

Consider, for example, models that predict whether to approve a mortgage or an insurance policy. Biases trained into the AI inform its outcomes, and those outcomes have real-life impact, both on the people waiting for their policies and on those running the insurance companies. Even if AI delivers a better accuracy rate, you have to be able to explain why you turned someone down for a mortgage or a policy, or risk relying on AI-driven choices based on biases, errors or worse.
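For contrast, here is a minimal sketch, in Python with invented feature names, weights and threshold, of what an explainable decision can look like: a simple additive scoring model whose per-feature contributions can be itemized for the applicant.

```python
# A hypothetical mortgage-scoring model. The features, weights and
# threshold are invented for this sketch; real underwriting models are
# far more involved. The point: every contribution to the decision can
# be itemized and read back to the applicant.
WEIGHTS = {
    "credit_score": 0.004,    # per point of credit score
    "debt_to_income": -2.0,   # per unit of debt-to-income ratio
    "years_employed": 0.05,   # per year of employment
}
THRESHOLD = 2.5

def score_applicant(applicant):
    """Return the decision plus an itemized breakdown of why."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    approved = sum(contributions.values()) >= THRESHOLD
    return approved, contributions

approved, reasons = score_applicant(
    {"credit_score": 640, "debt_to_income": 0.45, "years_employed": 3}
)
print("approved:", approved)  # False: the total score falls short
for feature, contribution in sorted(reasons.items(), key=lambda kv: kv[1]):
    print(f"  {feature}: {contribution:+.2f}")
```

No comparable breakdown falls out of an LLM, and that gap is precisely what regulators will ask about.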

Another example would be using an LLM to determine what treatment a patient should receive. Is it enough to know the system is highly accurate if no one can explain why, in rare cases, it suggests something completely wrong?

The Future Of AI

As a VC, I am seeing a lot of companies building on top of LLMs and AI; the market is flooded with companies scrambling to produce the most accurate outcomes for consumers and enterprises. As startup founder Elizabeth Zalman and investor Jerry Neumann write in Founder vs. Investor, investing in these early technologies means diving into the unknown while founders race to stay ahead of the competition.

Both investors and founders would be wise to stay on top of AI's progression, and specifically of what makes applications built on top of LLMs truly valuable (i.e., the explainability factor). The landscape is changing more rapidly than ever before, and building for what's current doesn't ensure you're building for what industries truly need, especially the regulatory standards they will have to live by.

Going forward, an LLM with the highest accuracy won't get very far unless practitioners can explain how it reaches its outputs. Explainability will be the key to becoming the preferred, most widely adopted choice.

The race is on.


