This week, Digital Future Daily is focusing on the fast-moving landscape of generative AI and the conversation about how and whether to regulate it — from pop culture to China to the U.S. Congress. Read our full slate of coverage, from AI vs. Drake and The Weeknd to how large language models fit into the doctor's office.

...And welcome back to our regular Friday feature, The Future in Five Questions. To cap off the week in AI, we have Kareem Saleh, co-founder and CEO of FairPlay AI. The company uses AI to ensure that the AI and machine learning systems used by lenders don't discriminate based on race, gender or other protected characteristics. Among its clients is the New York State Department of Financial Services, which is trying to prevent unlawful discrimination in the insurance sector. Before joining the startup ranks, Saleh served in the Obama administration, first as chief of staff to the State Department's Special Envoy for Climate Change and later as a senior advisor managing the U.S. government's portfolio of emerging-market investments. Responses have been edited for length and clarity.

What's one underrated big idea?

A fairness infrastructure for the internet that enables "Fairness-as-a-Service" — a set of on-demand tools, systems and protocols that can detect and correct biases in digital decisions in real time. This is vital in the era of machine learning and AI, where algorithms increasingly influence everyday life — like whether you get a job interview, or a loan, or even a kidney transplant.

What's cool about the Fairness-as-a-Service approach is that it allows organizations of all sizes to adopt fairness in their decision-making without the need for extensive resources or expertise. And it's kind of our collective responsibility to prioritize ethical technology development.

We have all kinds of fairness protections and guardrails in physical space, right? Once upon a time, there were signs in shop windows that said, "No people of color allowed." We had to adopt laws that made physical spaces more accessible to historically discriminated-against communities. It seems natural that as we transition to an increasingly digital existence, we will need a corresponding set of fairness safeguards. We're in the early stages of creating those safeguards.

It used to be that you would go to a bank and sit across the table from a loan officer, who would try to make a judgment about your creditworthiness based on whether he knew your kids from school or from church. Just as we had to prohibit human loan officers from taking into account characteristics like race, gender, age and marital status, we need a similar set of governance rules that apply to machines making these decisions. You see this in finance, in consumer lending, in insurance, in criminal justice and in predictive policing. For those of us trying to make those machines safe for humanity, we need to put in place systems, processes and controls — just as we did in the physical world.
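To make the idea concrete: one basic check a "Fairness-as-a-Service" tool might run is the four-fifths rule used in U.S. disparate-impact analysis. What follows is a minimal, purely illustrative sketch, not FairPlay's product or API; the function name, data and threshold are all hypothetical.

```python
# Hypothetical fairness check: the "four-fifths rule" from U.S.
# disparate-impact analysis. A Fairness-as-a-Service endpoint could run
# something like this over a live stream of lending decisions.
# All names, data and thresholds here are illustrative.

def adverse_impact_ratio(decisions, group):
    """Approval rate of `group` divided by the best-treated group's rate."""
    rates = {}
    for g in {d["group"] for d in decisions}:
        subset = [d for d in decisions if d["group"] == g]
        rates[g] = sum(d["approved"] for d in subset) / len(subset)
    return rates[group] / max(rates.values())

loans = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

ratio = adverse_impact_ratio(loans, "B")
print(f"Adverse impact ratio for group B: {ratio:.2f}")  # 0.50
if ratio < 0.8:  # the four-fifths rule of thumb
    print("Potential disparate impact -- flag for review.")
```

In practice a real service would use far richer statistics, but the shape is the same: measure outcomes by group, compare, and flag in real time.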
What's a technology you think is overhyped?

These days, the conventional wisdom is that large language models — like the one underpinning ChatGPT — are overhyped. Large language models are trained on the open web. That can cause them to make silly mistakes — or worse, use discriminatory or abusive language. There's no question that large language models need to be fact-checked, de-biased and hardened against hacking attacks. Even the most sophisticated AI companies struggle to get this stuff right. Just look at Google's rollout of Bard.

But the fact that these systems make silly mistakes today should not lead us to overly discount them. People forget that the iPhone was not terribly useful when it first launched, and internet search engines often directed you to nonsense information in the early days. But both of those technologies spurred innovation that allowed imperfect tech to mature and eventually become essential to our lives.

So, are we in the midst of a generative AI hype cycle? Quite likely. But for all of their failings, generative large language models have the potential to transform how we interact, how we learn and how we create. When our head of data science encounters skepticism over generative AI, he says, "Look, even if these systems only make each of us 10% more productive — that's 10% more productivity across a wide swath of humanity, from the CEO to the janitor." That's gonna have a profound effect on competitiveness and innovation.

What book most shaped your conception of the future?

More than any other book, "7 Powers" by Hamilton Helmer has given me a very useful framework for understanding trends in the technology landscape. The title refers to the seven sources of durable competitive advantage. I'm super obsessed with this book — it's just a very useful framework by which I evaluate product ideas, new business opportunities and investments.

What could government be doing regarding tech that it isn't?

I think we're starting to see a more serious consideration of what the regulatory regime for AI should look like. And there are a number of difficult questions to be answered here. How should AI systems be governed? How do we assure ourselves of the accuracy of the data pipeline? In which industries and domains should AI systems be required to be fair?

Some domains — like financial services, healthcare, employment and labor — have good model-governance regimes that can simply be updated for the AI era. But in other domains, like social media, education and criminal justice, I suspect we're going to need a new regulatory body to ensure the safety and reliability of AI systems. Look at the criminal justice system, where the controversial COMPAS recidivism algorithm was shown to be racially biased. Now imagine a thousand COMPAS systems. How do you rigorously analyze that sort of AI system at scale?

Traditionally, our economic system has been focused on innovation — letting a thousand flowers bloom. But what is really interesting is that even folks who are regulatory skeptics and industry participants who are quite knowledgeable about AI systems — say, Mr. Musk — are coming out and saying, "There's got to be some rules of the road here." Another common refrain is, "Oh, well, we shouldn't over-regulate, because if other jurisdictions don't apply these regulations, we are at a competitive disadvantage." But a couple of weeks ago, China put out a set of very, very strict rules on AI governance. The Cyberspace Administration of China is saying it is concerned that AI will undermine national unity. If you look closely at the Chinese statement, they are concerned about some of the same things that we've been talking about. I think everybody is coming to the conclusion that these systems pose risks to the nation-state. We are likely to see a more global effort at harnessing the power of AI systems.
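One partial answer to the "thousand COMPAS systems" question above: the same statistical checks can be run mechanically across an entire inventory of models. Here is a hypothetical sketch; the registry, model names and data are invented for illustration and imply no real system or agency API.

```python
# Hypothetical sketch: auditing many deployed models at scale by running the
# same disparity check against each model's recent decisions.

def approval_rates(decisions):
    """Per-group approval rates for a list of {"group", "approved"} records."""
    rates = {}
    for g in {d["group"] for d in decisions}:
        subset = [d for d in decisions if d["group"] == g]
        rates[g] = sum(d["approved"] for d in subset) / len(subset)
    return rates

def audit(model_registry, threshold=0.8):
    """Flag every model whose worst-off group falls below the threshold."""
    flagged = []
    for name, decisions in model_registry.items():
        rates = approval_rates(decisions)
        if min(rates.values()) / max(rates.values()) < threshold:
            flagged.append(name)
    return flagged

# Imagine a thousand COMPAS-like systems; here, two toy ones.
registry = {
    "model_001": [{"group": "A", "approved": True}, {"group": "B", "approved": False}],
    "model_002": [{"group": "A", "approved": True}, {"group": "B", "approved": True}],
}
print(audit(registry))  # -> ['model_001']
```

The hard part at scale is not the arithmetic but the plumbing: getting every model's decisions into a registry like this in the first place, which is exactly the kind of infrastructure question regulators would have to answer.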
What has surprised you most this year?

We're seeing real advances in quantum computing hardware. Once, we thought we couldn't build a 100-qubit system — now we're building 400-plus-qubit systems. That is an entirely different computational model from anything humanity has ever experienced.

Since the invention of the microprocessor, classical computing has scaled linearly: increase the number of bits by a factor of 10, and the amount of information you can process increases by a factor of 10. But in quantum computing, the scaling is exponential: add just 10 qubits, and the state space the machine can work with grows roughly 1,000-fold (2^10 = 1,024). So for certain classes of problems, the computers of the future are going to be exponentially faster than the computers that exist today.

Obviously, the big companies are focused on it, but the companies doing the most interesting work are still relatively small startups. The ability to process 1,000 times more information — that's gonna be world-changing. So I'm really excited about some of the new developments in quantum computing hardware.
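For intuition on the scaling claim above: an n-qubit register has a state space of dimension 2^n, so each additional 10 qubits multiplies that dimension by 2^10 = 1,024, the rough 1,000-fold figure Saleh cites. A quick, purely illustrative calculation:

```python
# Back-of-the-envelope: an n-qubit register's state space has dimension 2**n,
# so every 10 extra qubits multiply that dimension by 2**10 = 1,024 --
# the rough "1,000-fold" figure above. Purely illustrative arithmetic.
for n in (100, 110, 400):
    print(f"{n} qubits -> 2^{n} = {2**n:.3e} basis states")

print(f"Going from 100 to 110 qubits grows the state space {2**110 // 2**100}x")
```

Note the caveat built into the prose above: the exponential state space translates into exponential speedups only for certain classes of problems, not for computation in general.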