Welcome back to our weekly feature: The Future in 5 Questions. Today we have Martin Markiewicz — CEO and co-founder of Silent Eight, a decade-old presence in the world of AI startups that builds machine learning and generative AI systems to help banks comply with sanctions and anti-money laundering statutes. Read on to hear Markiewicz's thoughts on renewed confidence in generative AI tools from investors and financial institutions, why the conversation needs to move beyond large language models, and what it will take to make AI systems safe for high-risk uses. Responses have been edited for length and clarity.

What's one underrated big idea?

Pushing AI technology to make it safe to use in cases where it's not safe to use right now. We're all super hyped about ChatGPT and LLMs. But right now, even if I ask ChatGPT to help me with something, or GitHub Copilot to give me some lines of code that don't compile — it's no big deal. I use it as an assistant and I am aware of its limitations. It's designed to figure out stuff using natural language processing. But I wouldn't trust ChatGPT to do a very hard computation.

But there are a lot of use cases for AI where the cost of making a mistake is actually pretty high — like every time a human life is on the line. These use cases include self-driving cars, or financial institutions using it to track sanctions to prevent terrorist financing. Giving me a bad answer about the capital city of a country does not cost lives and billions of dollars in damages. The idea is that for use cases where the cost of making a mistake is very high, we should have AI solutions capable of performing at a level that matches those stakes.

Autonomous weapons are exactly one of these use cases. You just don't want some model that can hallucinate to pilot a drone over populated areas with bombs attached to it. If we're doing this, then let's have some certainty that we won't end up with some hallucination and a huge mushroom cloud.

What's a technology you think is overhyped?

The idea that large language models are absolutely enough to create an artificial general intelligence that will be doing everything everywhere and just replace us all. The whole transformer architecture is a completely new design from what we had previously. But to think that just one type of architecture will be able to deal with all the high-risk and low-risk use cases for AI — that's hype. That's not reality. We still need to do way more and come up with smarter designs. I just don't think LLMs are game over.

It's almost like we switched from crypto into AI. Right now, it turns out AI is cool. But a year ago, or two or three years ago, when I was explaining to people what we do, they were like, "Okay, sounds boring." Like — it was not cool. But now, when I talk to friends, they're like, "Oh, you're doing a generative AI thing! And you have a system that actually goes somewhere in the bank and makes all these decisions autonomously!" And I was like, yeah, I've been telling you this for the past few years.

It's a very different vibe from the beginning, from when we were building this stack from those very early building blocks. And now it's a feeling of: "Of course we're here." Like, this is where we were supposed to be going. Natural language, computer vision — these were all problems to crack. And this is the technology to crack them. This is why we study this technology. This is why we invent new stuff.

What book most shaped your conception of the future?
"How to Create a Mind," by Ray Kurzweil, which I read 11 years ago. It got me thinking about our brains as some kind of hardware running some software. It's similar to my PC — I can look inside my PC, but I don't know what's happening in there. On the screen I see the game I'm playing or the video I'm watching, but under the hood it's just electrical signals. It's just a combination of hardware and software that makes my PC work. Maybe my brain works in the same way. So thinking about the brain like this, without creating too much magic around it — just some hardware plus some software equals something awesome — resonated with me very well, because I'm a mathematician, so I already had a good pragmatic base.

What could government be doing regarding tech that it isn't?

Not too much. I would not like to see harsh restrictions or other measures that would just stop the development of the AI field. My thinking here is that I would not rush to stop market participants from creating innovation, because the things we can achieve are not done yet. It's still a long, long road ahead. So if we pull the brakes right now, we'll just get stuck in one spot for a really long time. And that would not be cool.

We're talking about a technology that can be used for transcribing conversations, can be used as an assistant to go through long documents, can be used to create babyAGI on GitHub. We probably could not have thought of these use cases not so long ago. Even for people within the industry — how do we even know how to guide others, when people are inventing so many impressive things with just 100 lines of code? So if you allow people to be creative, they can create a lot of different use cases.

What has surprised you most this year?

That my parents understand what I do right now. After AI went mainstream, suddenly it's not a mystery anymore. Back in the day, when I was meeting with investors and people from banks — our actual customers, who are anti-money laundering specialists but not necessarily people who understand AI — I was looking for the easiest way to get through those first conversations. And the best thing I came up with was to say, "Look, this technology has been used at HSBC, First Abu Dhabi Bank, Standard Chartered" — I was just name-dropping all these customers that were already using our technology, because it built up credibility for the startup. I could not start with an explanation of how this technology is designed, why it's designed this way, how it works, and so on. That came after some credibility was established.

Now there is no mystery in what we do anymore. That's absolutely surprising to me. I thought it would never end. I thought this was just our life, this is just how it works. And even my conversations with our early investors this year go something like: "We all knew that generative AI is the future." These are funny conversations.