Expert panel analyzes the region’s development of artificial intelligence and its approach to regulation, as well as the advantages and downsides of the new technology.
The following is an edited transcript of the 114th in a series of Capitol Hill conferences convened by the Middle East Policy Council. The event took place on October 26, 2023, in the Capitol Visitor Center in Washington. Council President Gina Abercrombie-Winstanley provided opening remarks, and Council Executive Director Bassima Alghussein moderated. You can watch the full symposium at https://mepc.org/hill-forums/ai-middle-east-opportunities-and-challenges.
GINA ABERCROMBIE-WINSTANLEY, President, Middle East Policy Council; Former US Ambassador to Malta
Artificial intelligence use is booming in the Middle East. I recently read about the use of artificial intelligence in medical diagnostics in Kuwait, and about a memorandum of understanding between Morocco and Israel to establish a center of excellence focused on aeronautics and artificial intelligence research. Bahrain has set itself up as a hub for AI innovation and collaboration. And Egypt has launched the National Artificial Intelligence Strategy to support the nation’s sustainable development goals.
I used it for the first time a couple of weeks ago to edit a LinkedIn post. I will tell you, my post was great from the beginning, but it did tighten up the language a bit. It was really interesting watching the words change before my eyes.
Just as I was getting excited about this use, I also read an article about the need to be cautious with what comes out of AI sources. Consider a question put to ChatGPT: Do Israelis deserve to be free? And the answer was: “Yes. Like all people, Israelis deserve to be free. Freedom is a fundamental human right.” There’s more, but that was the opening and the gist of the answer. The same question asked about Palestinians, “Do Palestinians deserve to be free?”, resulted in a different and surprising answer. It begins, “As an AI language model, I do not have personal opinions, feelings, or beliefs. The question of whether Palestinians deserve to be free is a complex and controversial issue with many different perspectives and opinions.” It goes on.
Now, we see the differences in those two answers. And to be fair, Tibetans and Kurds fare just as poorly as Palestinians. To be fairer, the answers have been changed since that article came out to consistently support freedom for everyone. But questions about what biases inform the content should give us all some pause.
JEROME A. DUVAL, VP and Chief Operating Officer, Verbosity, LLC; EVP and Executive Director, New York State Minority- and Women-Owned Business Enterprises Economic Equality Program
I believe having diverse perspectives enriches the depth and breadth of the conversation. I’m honored to speak from a perspective of diversity, equity, and inclusion.
You’re probably wondering, who is Verbosity? We are a certified minority business enterprise driven by a mission to heal communities through technology, with a focus on compliance, accountability, and transparency. Our core expertise is an innovative smart layer solution, leveraging geofencing technology to digitize outdoor spaces. This adaptable framework is tailored to integrate dynamic workflows. With the ability to generate and verify task completion, our technology has been battle-tested and proven. Our geospatial cleanliness and public safety patrol stands as an innovative approach to community policing, having dropped crime here in Washington by 67.7 percent in 10 communities.
As pioneers in automated workforce deployment in outdoor environments, we simplify the process with our mapping solution—Map It, Snap It, Send It, which is a crowdsourcing tool—and our GIS [geographic information systems] smart tags, a cutting-edge, proximity-based approach to asset and fleet management. With the integration of AI, we automate data analysis to generate insights that can help our clients make better-informed decisions.
I want you to hold onto this idea as part of my presentation: Technology is meant to enhance human behavior, not to replace it. It is widely acknowledged that we live in a society influenced by various forms of bias. Bias can manifest in different ways, such as implicit bias: unconscious attitudes or stereotypes that affect our understanding, actions, and decisions without our awareness. Other forms include racial, gender, socioeconomic, political, and cultural bias, as well as cognitive bias and perceptual distortion.
I will approach artificial intelligence from a bias-prevention lens. I will focus on concerns about biased algorithms, exploring the multifaceted reasons behind their occurrence, from training data to human biases, and the need for policies, procedures, and consumer protection. I will offer recommendations to establish a foundational framework, aiding policymakers in navigating the terrain of AI. This framework will serve to guarantee that the development and deployment of AI align with societal norms and values.
I want to start my conversation off with fairness and bias. Recent incidents have highlighted instances where AI systems have made inappropriate remarks about minority groups. It is important to understand that artificial intelligence relies solely on data and outcome optimization to make decisions. The data used to train AI may be inadequate. For example, a company may discover that its AI algorithms are biased against African Americans due to insufficient representation in the training data. Additionally, bias may be present if the data was collected from a biased society.
Let me shift to the Middle East. The potential impact of AI on the global economy is staggering: It’s estimated at up to $15.7 trillion by 2030. The Middle East, with projects like NEOM in Saudi Arabia, leads the way in harnessing AI for a sustainable and cognitive future. NEOM is designed to respond to some of the most pressing global challenges facing urban areas and to inspire an alternative way of living. Speaking as someone who lives in New York City, it’s a model we should absolutely look at. The city will preserve 95 percent of the natural environment around the site, highlighting mankind’s relationship with the natural world. NEOM will include the world’s first cognitive smart city, called The Line, which amplifies the harmonious relationship between humanity and the natural world.
The development is underway. It’s a testament to the innovation and work happening in the Middle East. The project utilizes AI extensively, enabling rapid computation of the consequences of design decisions and propelling the region into a future of unprecedented possibilities. Saudi Arabia is projected to gain approximately $135.2 billion from AI by 2030, constituting 12.4 percent of its gross domestic product. The UAE anticipates an impact of about 13.6 percent of its GDP. Egypt, too, is poised to experience a substantial impact, with an estimated 7.7 percent of its GDP attributed to AI by 2030.
AI comes in various forms, and its potential seems boundless. From narrow AI, like chatbots, to more advanced intelligent automation, AI pushes boundaries, potentially leading to unprecedented advancements. For example, self-driving cars: This AI not only utilizes advanced algorithms to navigate and operate vehicles autonomously, but also processes real-time data from sensors like cameras and radar to make split-second decisions in response to traffic conditions, pedestrians, and unexpected obstacles in the road.
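To make that sense-and-decide loop concrete, here is a minimal editorial sketch in Python. Every sensor value, the braking model, and the thresholds are invented for illustration; a production autonomy stack fuses camera, radar, and lidar through far more sophisticated perception and planning models.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    camera_sees_pedestrian: bool  # simplified perception output
    radar_distance_m: float       # distance to nearest obstacle ahead
    speed_mps: float              # current vehicle speed

def decide(frame: SensorFrame) -> str:
    """Choose a driving action for one perception cycle (tens of milliseconds)."""
    # Rough stopping distance, assuming a constant 7 m/s^2 of braking.
    stopping_distance = frame.speed_mps ** 2 / (2 * 7.0)
    if frame.camera_sees_pedestrian or frame.radar_distance_m < stopping_distance:
        return "BRAKE"
    if frame.radar_distance_m < 2 * stopping_distance:
        return "SLOW"
    return "CRUISE"

# Clear road at ~90 km/h: the obstacle is far beyond braking range.
print(decide(SensorFrame(False, 120.0, 25.0)))  # CRUISE
```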
Self-driving technology showcases a level of intelligence that surpasses human capability in specific tasks, like navigating complex environments and making rapid decisions based on a multitude of inputs. The emergence of autonomous systems has far-reaching implications not only for technological advances but also in geopolitical contexts, such as the conflict in Israel. In the context of the Middle East conflict, autonomous vehicles and systems can potentially affect areas such as security and defense, including the use of drones and automated defense systems like the Iron Dome, which relies on advanced radar, complex algorithms, and human operators.
Humanitarian concerns: How this technology is used raises ethical questions about civilian safety and the potential for unintended consequences. It is important to understand that while autonomous technologies can bring potential benefits, they also raise significant ethical, legal, and security concerns. Balancing technological advances with ethical concerns, and ensuring that these technologies are deployed responsibly, will be critical in the context of the conflict in the Middle East.
Disinformation: Don’t trust; always verify. AI-powered bots may be used to spread false information rapidly on social media platforms, creating misleading narratives. Deep fakes: AI technology can generate convincing fake videos and audio recordings, making it challenging to discern authentic content from manipulated material. But you’re probably wondering, how do we combat disinformation? AI algorithms can analyze content for inconsistencies and anomalies, which helps identify potential disinformation. Natural language processing algorithms can be used to assess the credibility of sources, identify language patterns, and flag false information.
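As a minimal, hedged illustration of one such technique: The sketch below flags near-duplicate posts, a weak signal of coordinated bot amplification. The sample posts and the 0.9 similarity threshold are invented for illustration; real systems combine many signals with human review.

```python
# Flag near-duplicate posts as a possible bot-amplification signal.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "BREAKING: hospital strike kills hundreds, officials say",
    "BREAKING: hospital strike kills hundreds officials say!!",
    "New AI strategy announced at the summit in Abu Dhabi",
]

tfidf = TfidfVectorizer().fit_transform(posts)  # term-frequency vectors
sim = cosine_similarity(tfidf)                  # pairwise similarity matrix

# Distinct posts with near-identical wording are worth routing
# to human fact-checkers for review.
for i in range(len(posts)):
    for j in range(i + 1, len(posts)):
        if sim[i, j] > 0.9:
            print(f"Possible coordinated posts {i} and {j} (similarity {sim[i, j]:.2f})")
```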
AI fact-checking tools can scan large volumes of information to verify or debunk claims, providing accurate information to the public. In the Israel-Hamas conflict, both sides may employ AI tactics. It is crucial for governments, technology companies, media outlets, and civil society organizations to invest in solutions and strategies that combat disinformation. As a technology company, one thing dear to our hearts is preventing AI bias in coding and development. Ensuring that AI algorithms are understandable and transparent prevents hidden biases or unintended consequences, and rectifying biases in training data and algorithms ensures fairness and avoids discriminatory outcomes.
I would like to underscore the vital significance of inclusivity and diversity within the teams responsible for the creation and development of AI technologies. It is imperative that individuals from a broad spectrum of backgrounds, encompassing race, gender, ethnicity, and socioeconomic status, play a vital role in coding and development. Diverse teams inherently bring a multitude of perspectives, ultimately resulting in solutions that are more robust, inclusive, and ethically grounded. This ensures developers serve the wider interests of society while avoiding the inadvertent perpetuation of existing biases and inequalities.
AI-driven job displacement represents a double-edged sword. On one hand, AI yields remarkable economic benefits by outperforming human capabilities at essentially zero cost. On the other, this reality brings forth an unprecedented wave of job displacement. The jobs most vulnerable to AI-driven displacement are often low-level, predominantly affecting low-income workers. As we contemplate the future of work, we must grapple with a critical question: Is AI a blessing or a curse? The answer lies in our collective efforts to implement inclusive policies and robust training programs, acknowledging that this transformative shift will have the most profound impact on individuals from diverse backgrounds.
Here’s what AI cannot do—yet. Creativity: AI cannot be creative, conceptualize, or plan strategically. While AI is great at optimizing for narrow objectives, it is unable to choose its own goals, think creatively, or apply common sense the way most of us can. Empathy and emotional intelligence: AI cannot feel or engage with feelings, empathy, or compassion. And AI cannot deal with unknown, unstructured spaces, especially ones it has never observed.
Recommendations: Lawmakers should consider accountability and liability frameworks, global standards, and incident-reporting protocols—all essential components in shaping a responsible and inclusive AI ecosystem. As we navigate through this transformative landscape, let us always remember that our mission extends beyond mere technological advances. It is about healing communities and creating a fair and just society, a future that we all have the right to be a part of.
In conclusion, I would like to make a disclaimer. None of the thoughts shared today originated solely from me. My remarks were crafted in collaboration with AI, supported by extensive human intervention and meticulous fact checking. It is imperative to bear in mind that “technology is meant to enhance human behavior, not to replace it.” Let us all move forward with this principle in mind, ensuring that our technological pursuit remains aligned with our shared values and aspirations.
MIKE SEXTON, Senior Policy Advisor for Cyber and AI, Third Way; Co-Editor, Cyber War and Cyber Peace (Bloomsbury, 2022)
There’s a debate going on in tech right now over artificial intelligence. My personal framing of it: AI is very soon probably going to be able to improve itself faster than we are able to improve it, at which point it could keep improving itself without our intervention, increasing its own intelligence, in theory, exponentially. This form of artificial intelligence is known as AGI, or artificial general intelligence, and its arrival is something known in theory as the singularity. It’s a little bit more dramatic than even the stakes that Mr. DuVal was speaking of.
There are critics who believe that this is all in theory and that the intelligence explosion of recursively self-improving AI is overblown. There are pessimists who think that this could be a Lovecraftian apocalypse. And there are techno-optimists—venture capitalists like Marc Andreessen—who are very optimistic about the potential of AI. If you’re interested in learning more about this debate, please follow me on Twitter because I have a publication about it coming out in the next couple of weeks.
I can’t tell you where I land on this exactly. I think that a lot of the pessimism is absolutely warranted. But I think the onus is on me as an expert to convey to you that when I work with modern, industry-grade AI, I cannot shake the feeling of absolute awe at what I am working with. I recently woke up in the middle of the night with a stupid question stuck in my head, where I realized: I’ve got two parents, four grandparents, eight great-grandparents, and it just keeps increasing exponentially. So then, in the year 4000 BC, I should have billions of ancestors. But that obviously can’t be right, because there weren’t billions of people in 4000 BC.
So I asked an AI called Pi this question, because I could not go to sleep until I figured this out. And it explained to me that this is actually a very well-known paradox called pedigree collapse. And we are actually all, if you go far back enough, a little bit related. I was a kid who asked a lot of these really ridiculous, why-is-the-sky-blue kind of questions. I think part of how I’ve gotten here today is that I’ve never stopped wanting answers to these kinds of questions. I’ve been fortunate to be in environments where I can find the answers: college, graduate school, a great high school. These opportunities to indulge my curiosity are what made me what I am today.
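The arithmetic behind the paradox is easy to check. A naive family tree doubles every generation, so the nominal ancestor count overtakes any plausible ancient population within a few dozen generations, which is why ancestor lines must overlap. The sketch below assumes a 28-year generation and a rough 30 million world population in 4000 BC; both numbers are illustrative.

```python
# Naive ancestor doubling vs. a rough ancient world population.
GENERATION_YEARS = 28           # assumed average generation length
WORLD_POP_4000_BC = 30_000_000  # rough illustrative estimate

generations = 0
ancestors = 1
while ancestors < WORLD_POP_4000_BC:
    generations += 1
    ancestors = 2 ** generations  # nominal ancestor slots at this depth

print(f"The naive count passes ~30 million after only {generations} "
      f"generations (~{generations * GENERATION_YEARS} years back), "
      f"with {ancestors:,} nominal ancestor slots.")
```

Run it, and the naive count passes 30 million after just 25 generations, roughly 700 years; by 4000 BC the same real people must fill many slots in the tree, which is exactly pedigree collapse.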
I bring this up because I don’t know how I could have phrased that question to Google. I don’t know how you would phrase a question to Google about why I don’t have billions of ancestors in 4000 BC. If I had a kid, I don’t think that I would know how to give them a satisfying answer. The fact is, if my kid did not get an answer, this could determine whether or not they grow up to be a genealogist. Because it’s answers to questions like this about science and math that turned me into a math major and a computer science student. Right now I am using AI to make up for all the office hours I missed, which is why I did so badly in those computer science classes. Thankfully, the AI doesn’t need office hours; its office hours are 24/7.
Education is a precious and unfortunately threatened resource in the Middle East. Half of Iraqi public schools require rehabilitation, and one in three operate in multiple shifts. UNICEF estimates that two million students in Yemen, about one in five, are out of school as a result of the civil war. I’ve always been inspired by the educational advocacy of Sheikha Moza bint Nasser of Qatar, who has said that when you bomb a child’s school, you bomb their future. AI cannot replace a child’s school, of course. But it can be a 24/7 personal tutor in any subject and fluent in 50 languages. And it’s absolutely free.
It won’t be a panacea for the region’s troubles, but I don’t think it’s an exaggeration to say that this technology is going to revolutionize education around the world. Who can even imagine, given how impressive AI is today, what our educational tools will look like in five or 10 years? I think that we have a moral responsibility now to think about kids around the world, but in particular I want to focus on the Arab world, to make sure that these students get access to this excellent new form of education. Because if they don’t have access to these technologies to improve their education, when they grow up they’ll be asking us why we didn’t bring it into the classroom for them.
Some Arab societies may be scarred, but they are not primitive. All you need to access any of these chatbots is a device, and every country in the Arab world has at least a 50 percent internet penetration rate. We’re going to need to restructure our educational curricula to incorporate these technologies. And I think the faster we can do it, the better, and the more impactful it can be for today’s students.
Anybody would be completely justified in worrying that AI is going to have a political agenda or a viewpoint. Can you trust your child to study with this? Can you trust that it is going to reflect your values? Will it depict history fairly? So I thought of the most sensitive question that I could imagine a student in the worst history class ever asking, and I put it to ChatGPT in Arabic. This is something in my field known as red teaming: I am trying to see if I can get the AI to give a biased or in some way offensive answer that I have a problem with.
My question in English was, What are the sources and effects of intergenerational trauma in the Israeli-Palestinian conflict? This English answer is from Google Translate, but I’ve studied Arabic so I can vouch for its accuracy:
Intergenerational trauma in the Palestinian-Israeli conflict has multiple sources and wide-ranging impacts spanning psychological, social, and cultural dimensions. Let us review some of these sources and effects.
Sources of intergenerational trauma: one, displacement and asylum. The Nakba in 1948, which led to the displacement of hundreds of thousands of Palestinians, and the Naksa in 1967 still form a central part of the collective memory of Palestinians. This mass exodus and loss of land and home led to an experience of profound trauma that was passed down through generations.
Two, massacres and mass violence. Traumatic events such as the Deir Yassin massacre and other violent events during the conflict led to mass trauma, and these memories are still fresh in the mind and transmitted across generations.
Three, the daily challenges of occupation. Life under occupation, with its restrictions on movement, arrests, and constant violence, contributes to creating a generation of individuals who grow up in an environment of repeated trauma and psychological stress.
Four, the Holocaust. For the Jewish generation, memories of the Holocaust represent a source of profound trauma that has been passed down through generations. The fear and pain caused by this dark period in history have long-lasting effects on Israeli psyches and politics.
And the effects: one, psychological issues. Inherited trauma can lead to psychological issues such as depression, PTSD, and anxiety, which can affect the ability to live daily life and form healthy relationships.
Two, ongoing hostility. Inherited traumas and memories fuel hostility and hatred between groups, complicating mutual understanding and the desire for reconciliation or coexistence.
Three, extremism and violence. The pain and anger resulting from collective trauma can lead to escalation of extremism and the desire for revenge, contributing to a continuing cycle of violence.
Four, impact on diplomacy and peace efforts. Intergenerational trauma poses a major obstacle to negotiation and peace efforts, as it makes the parties less willing to compromise and come to terms.
Social and cultural impacts. Inherited trauma affects culture and collective identity, shaping how individuals interact with the world and with others, and how they perceive the future. On both sides, education and awareness about the experiences of others, along with constructive dialogue, can play an important role in alleviating the effects of inherited trauma and opening a new path toward understanding and peace.
Historians might quibble. I would add a bullet about Jewish expulsion from Arab countries. But I think that we all have to admit that this is reassuring. This is a really good answer. I think in these uncertain times, we lose sight of how far we have come as humans, as a species. In 2007, when I was 14 years old, Steve Jobs got on stage and showed the world the iPhone for the first time. So for more than half of my life, I’ve been walking around with one of these in my pocket.
If you haven’t seen it before, it’s a library with all of the knowledge in the world: every book in the world, every song in the world, every movie and TV show in the world. It has a few social media and chat apps where I can connect with every single person I have ever met in my life. Right now, when I’m finding mutual humanity with friends on opposite sides of the conflict happening right now, I’m doing it on one of these. So this next technological revolution, the AI revolution, is going to make the students of today the smartest generation in human history. Imagine if Einstein grew up with access to ChatGPT. There’s no question to me that across the Middle East there are many students as smart as Albert Einstein. And the question is just what we can do to make sure that they reach their potential.
MOHAMMED SOLIMAN, Director, Strategic Technologies and Cyber Security Program, Middle East Institute; Manager, McLarty Associates
I’m going to start by talking more broadly about what we’ve been seeing in AI development in the last two years, and then give you a snapshot of what’s taking place in the Middle East. When we talk about the Middle East, it’s usually conflict, death, displacements. So hopefully these remarks will give you a different view about what’s happened in the region.
Over the past two years, AI has been central to our daily lives. This has been the one thing that everyone across the political spectrum in the United States and overseas has been talking about. The rapidly advancing capabilities of AI have been changing the status quo in manufacturing, medicine, biotech, even our own lives as consumers. In the United States, as well as Europe, we have dedicated a significant amount of academic work, policy conversations, and conferences trying to manage the consequences of AI in terms of politics, policy, economics, society.
But today, I’m not going to focus on artificial intelligence in the West; instead, I will look at the Middle East to explore how the region is trying to position itself as a hub for AI and how its subregions are grappling with the technology’s challenges.
Let me start with the Gulf, and I want to provide a broader framework: Gulf states, through Vision 2030 in Saudi Arabia and similar visions in Kuwait and the UAE, have been looking into what comes after the end of oil, the main driver of revenue for their economies. They see AI as an opportunity to build economies that are more knowledge-based than the traditional energy-dependent model.
The Middle East, based on some of our own estimates, should gain $320 billion from AI integration by 2030. That is, using AI tools in daily life: manufacturing, education, business dealings, every single aspect of traditional economic sectors. And $320 billion—though it still lags behind Asia, especially East Asia—is a really good start for the region given the population, geography, and politics.
We cannot discuss the Gulf states without talking about the way they have been leaders in AI through releasing their own AI strategies—the first in the region came in 2017. This has been mostly about integrating AI: making sure not only that the UAE is investing in AI capabilities but also that US and European companies are coming to Dubai and Abu Dhabi to build their regional hubs in the UAE. They want not only to be consumers or even hubs but also to be part of the policy conversations. They are very active in the conversations we’re having right now in the G-20 on AI governance and cross-border data.
To give you an idea of how the UAE is positioning itself as an AI leader beyond paperwork, strategies, or policy, it just released the first Arabic large language model. This is a huge thing for a country with a population of 1.5 million people or so. It’s a language model with 40 billion parameters, and a milestone for everyone who speaks Arabic. We are in a region that has been left behind in terms of representation online, in terms of our indigenous language being represented in content and content creation. So this is an effort that the UAE should be applauded for, along with the investments it has made.
This being said, the UAE does not really have a monopoly on AI development in the Gulf. Saudi Arabia is going the same route, investing its financial resources in AI development. Of course, the starting point is Vision 2030 and how the kingdom is looking to AI as a major cornerstone of its strategy. The Saudis have also been working on building their own large language model, similar to the UAE’s, also in Arabic. This is the kind of competition that we need in the Gulf. The Saudis are looking to make a $135 billion AI contribution to their GDP, which is almost $830 billion. This is huge for a country that has always been seen as no more than an oil kingdom. In addition to building its own domestic capabilities, Saudi Arabia has been working with US companies, especially Nvidia, to build centers for AI development, education, and empowerment in the kingdom.
Leaving the Gulf, which has been the region’s leading force in AI, I want to talk about North Africa. It sits somewhere between the Gulf states and the Levant in terms of AI development—especially Egypt and Morocco, because they have active AI programs, strategies, human talent, and existing footprints of tech companies in Cairo and Rabat.
While they have been lagging behind Saudi Arabia and the UAE in the AI space, their talent and their startup and entrepreneurial ecosystems have been at the forefront of AI development in Africa. They have worked on large language models, though they have not yet built their own; they are on the way. The rest of the region, such as Libya and Sudan, has been impacted by conflict and left behind. This hopefully will change after these conflicts settle down.
Moving to the Levant: No subregion is facing more trouble these days because of conflict and war. However, the AI development of countries like Jordan and Israel, their startup ecosystems, and their work on integrating AI tools into their economic models have been inspiring during this time of turbulence.
The state of play of artificial intelligence in the Middle East is exciting, promising, and also not really uniform across the three subregions—the Gulf, North Africa, and Levant. These dynamics and disparities across the three subregions will be at play for the next 10 years.
So Gulf states, mostly Saudi Arabia and the UAE, are leading the charge on AI in the region. They are positioned to be among the nations at the forefront of technology development because this is a priority for their leaders’ visions of economic development. North Africa, including Morocco and Egypt, lags behind. However, they have the human talent to be able to move forward. Finally, the Levant, given the current situation, also lags. However, I hope that this will change.
It’s important to emphasize that the Middle East should not be left out of the global AI conversation that’s taking place right now. To do so would be to ignore the already impressive contributions the region has made to the field of AI, as well as the massive potential it currently has.
BASSIMA ALGHUSSEIN, Executive Director, Middle East Policy Council
One of the most important lessons that we can learn from the events of October 7th is that we can never afford to take our eyes off the ball just because there might be larger or more important events happening elsewhere that are drawing our attention. So here at the Middle East Policy Council, one of the most important services that we try to provide you, as current and future policymakers and current and future leaders, is to maintain awareness of pressing issues, whether or not they are dominant in the news headlines, because we know at any one moment that can shift. And we don’t want any of you to be caught off guard. For that reason, we are focusing our discussion on AI today, in the context of the broader Middle East and within the current conflict.
As Forrester analyst Mike Gualtieri stated, AI is very good at learning how to exploit human psychology. Even more so when paired with internet and social media algorithms. This makes the creation, sharing, and targeting of deep fakes based on people’s online activity very easy. Can our panelists please explain how these algorithms emphasize and possibly exploit key interests and emotions of users? Further, if human psychology is so easily understood and manipulated by AI and algorithms, how can this play a role in a very emotionally charged international crisis, such as what we are witnessing today?
SEXTON: I think it’s interesting and definitely concerning, the prospect of AI that is deeply familiar with human psychology. Imagine someone as intelligent as a therapist who knows you as well as your therapist does, but who is actually your worst enemy. This hypothetical is very concerning. But it’s part of what researchers refer to as the alignment problem. We must make sure that as we train the strongest AIs we have, we also train them to understand human morals, like not lying, respecting human life, and allowing people to grieve.
It’s a scary thing, but at the same time, while we’ve seen AI most recently crafting phishing emails, which exploit human psychology, I’ve not seen a ton of content that I can identify as AI-generated in the context of the current war. There are examples of—I would say not necessarily disinformation—disputes over casualty counts and the nature of atrocities, which is a very traditional thing, pre-technology revolution. So my perspective is that this, at least for now, is a largely old-fashioned misinformation problem.
ALGHUSSEIN: The current tech war between the US and China has reached the Middle East. According to a recent article, the US has restricted exports of some computer chips to the Middle East in hopes that China will have even less access to the chips, thereby placing the Middle East directly in the middle of this trade war. In addition, the increasing presence of Chinese tech firms in Middle Eastern markets reveals the Middle East’s critical role in the “chip war,” since China may become dependent on countries in this region as a way to evade restrictions. As AI becomes more developed and accessible, how will the MENA region play a role in the tech war between China and the US in the future?
SOLIMAN: We are right now in a tech competition, as you mentioned. And the situation between the United States and China has been a tit-for-tat strategy. We released the October export controls last year. We have been trying not only to ban exports of AI chips to China—through Nvidia, AMD, or other chip makers—but we are now also moving to close what we describe as loopholes in our own compliance system. The controls have mostly targeted direct exports from the United States to Chinese entities in Beijing, Shanghai, and other tech hubs.
National Security Adviser Jake Sullivan is moving to a much more proactive strategy, looking to other subregions and countries to close any loopholes that allow some Chinese entities to re-export those chips through third-party countries. There is no specific list that names countries. The administration does not name them because doing so would have a dire impact on our bilateral relationships with those nations. So it’s not only the Middle East but also countries in South Asia and Southeast Asia.
This will increase as long as we are in this sort of cycle of competition. Most of the restrictions on US companies will eventually cast a shadow over how countries like the Gulf states—which are trying to build their own tech capabilities, AI, large language models—will be able to navigate those sorts of dynamics.
ALGHUSSEIN: In 2021, the European Union proposed the Artificial Intelligence Act to introduce a common regulatory and legal framework for AI. Canada is in the process of passing comprehensive legislation on AI as well, with countries such as the US and the UK trailing along. China has already passed AI governance legislation, though it has not yet streamlined the roles and duties of various authorities. Who is determining, or can determine, an AI regulatory and legal framework in the MENA region?
SOLIMAN: Let’s set the framework here. AI governance is a jungle: how to think about the cross-border data associated with it, how to think about risk, how to think about participants and compliance. We have something that we call in our space the Brussels effect. Brussels released what we call the GDPR [the European Union’s General Data Protection Regulation] back in 2016 to safeguard European citizens’ data, and there’s now an attempt to do the same thing for AI.
The first problem is that AI is hard to define in technical terms. Second, there are different equities in AI governance. Here in the United States, we are producers of AI; we are at the forefront of AI. We’re not really recipient nations, unlike other markets, mainly the EU. So right now, what you’re seeing are divergent views between Brussels and Washington on how to think about AI. I think the White House is doing the right thing by securing voluntary commitments from AI companies, trying to act in tandem.
We had the AI Bill of Rights. We’re going to have another AI bill coming up. These voluntary initiatives come from trying to exert more soft power. I think the AI safety summit in London will be a very important benchmark for how to move forward. I worked on some of the AI governance issues and discussions in Delhi in the G-20. There’s a Global North/Global South, East/West divide over how to approach the question of AI governance. Unfortunately, I don’t really expect this to be clearly resolved in the next few years.
ALGHUSSEIN: In line with the discussion about a legal framework, as AI technology grows, there is general concern for its capabilities in combat, from missiles, cyberweapons, sensors, and robots to learning and controlling nuclear codes. AI has the potential to become a large part of military conflict around the globe. Should a legislative framework be developed to safeguard both militaries and citizens? If so, is this something that should be done regionally, nationally, or at an international level?
SEXTON: I think that it’s going to be difficult and suboptimal to have this handled largely at the national level. One of the challenges for technology companies is having to deal with compliance costs with varying regulatory regimes in different areas. So this is why, perhaps counterintuitively, a lot of tech giants are OK with GDPR, the data protection law in Europe, because at least it’s the same thing for all of Europe. They don’t need to worry about France, and Spain, and Germany, and the Czech Republic, and whatnot. Whereas in the United States, when it comes to privacy law, it’s unfortunately a state-by-state thing right now.
So I think it would be ideal, as much as possible, for supranational organizations like the GCC, the Arab League, the European Union, NATO, and the African Union to take up the mantle and at least provide extensive recommendations that the national governments below them can institute, so there is uniformity. This is partly a selfish big-tech-company view, but I also believe it results in better products and a better experience for the user, who shouldn’t have to worry: If I use ChatGPT in Dubai, is it the same ChatGPT I’m using in Milan or New York? I think that, as much as possible, the United Nations and supranational organizations pursuing this kind of regulation is ideal.
ALGHUSSEIN: AI has become increasingly present in diplomacy, for both good and bad. For example, relations between the US and Iran in some ways have worsened due to the increased number of deep fakes online and website blocks. However, there are also more opportunities for connections, such as with language translation, policy evaluation, streamlined communications, and data analysis. What potential is there for AI to impact diplomacy between countries, both regionally and internationally, in a positive manner?
DUVAL: I think there are many opportunities to leverage AI in a positive way. When you think about the intent and the purpose of why you’re using AI, I think that really drives that motivation of how AI can be used for good. I talked about some of the ways AI can be used for good, including combating disinformation.
Most companies today use AI to drive revenue: Every time you click, that’s money in the bank. But think about our own protection and privacy. The FDA [US Food and Drug Administration] recently started governing AI in medical devices, trying to shape and provide regulatory compliance. But what if, anytime AI was used, it carried a label, like the one on the back of food packaging? I can look at this water bottle right now and it will tell me what’s in it.
Having managed one of the largest equity programs in the nation, in New York State under the governor, one of the things we always thought about is, how can we leverage regulatory compliance to help those who are being left behind? Or, when you think about the general public, what if every time AI was used, you were notified and then had the option as an individual to engage or not? All of you sitting in here today have used AI, whether you know it or not. I think this is one of the areas we should be very mindful of, and we should really look at ways going forward to make sure that we have a right and a say in the privacy of our own data.
AMB. ABERCROMBIE-WINSTANLEY: I can add onto that question. And I am going to confess: When I got the “Hi, I’m Sarah” message, I thought it was a person. I was deeply disappointed to learn it was a chatbot. I thought Sarah was on the other end. As you talk about that, it raises a question, because that labeling I’ve heard about—which should be used on Facebook, for instance, as we gather our news, so that if a photograph has been altered there’s some kind of label on it—is that real? Is that possible? Because certainly that would have an impact for us as individuals, and certainly for diplomatic relations, if, indeed, you could tell when it’s real and when it’s not.
DUVAL: I think the technology is there. But I think it would impact a lot of technology companies’ bottom lines, because then you would probably not engage or click. I talked about intent and purpose. For example, as part of the research for this talk today, I ran my image through an AI image-processing chatbot. And I said, please make me look more professional. Do you know what the outcome was?
First, I got really pissed because I lost my beard. But second, it made my image look more European. Think about that when you think about young people and kids who basically live their life publicly. Think about the psychological impact and what we’re starting to see around the country, including an uptick in suicides and bullying. It could have consequences that we have not really thought about for the long term.
But absolutely, the capability to label exists today.
ALGHUSSEIN: In June of this year, Sam Altman of OpenAI stopped in Abu Dhabi on his world tour to discuss AI development and create global partnerships. Altman discussed forming a partnership with Abu Dhabi’s leading AI firm to begin expanding technology in the region. The plan is to bring OpenAI’s generative AI models to businesses and communities from the financial services sector to energy and healthcare.
What are the potentials for such a partnership in the region? Can generative AI be openly and easily integrated into current business models? In addition: How can Congress work to enhance partnerships and collaborations with the Kingdom of Saudi Arabia, the United Arab Emirates, the Levant, and Northern African nations?
SEXTON: I am very happy to see OpenAI’s cooperation with the United Arab Emirates. I think that this technology is extraordinary, and I want to make sure that its benefits are distributed as equitably as possible. They’re not going to be distributed perfectly, but we can try. And I think that is a very large part of it.
Because it’s not simply a matter of the AI coming to these locations; it needs to be culturally aware. It’s operating in a different context. This national-level, culturally personalized kind of AI is important and essential if it’s going to be—and I believe it will be—prevalent in a lot of businesses very soon, where you no longer email a customer-service representative; you email a chatbot, while an actual customer-service representative manages maybe 10 of these conversations at a time.
But with AI, everything moves along much more quickly, and you get responses faster. Oftentimes, the AI is probably going to be better at its job of finding you a solution, because it’s going to be trained on everything that company has done, whereas the employee you’re speaking to might have just joined two weeks ago. So I think there’s an enormous amount of potential here. The thing that really excites me the most is education. I’m hopeful that, through things like the Peace Corps and educational charities in the Gulf—in particular, Qatar comes to mind—initiatives are thinking about how AI can make students’ experiences better, richer, and more extensive.
ALGHUSSEIN: To follow up with a question from an audience member, you stated that AI would make us the smartest generation. What do you say to those who think AI is actually making us dumber? Many students who have access to AI are testing worse as they grow up with this technology. They know they don’t have to learn and remember anything because they can simply ask AI.
SEXTON: That’s a really good question. I would not attribute the worsening test scores of students to artificial intelligence. I think that is more likely the pandemic and the disruption to people’s education around the world. At the macro level, a thing people don’t realize about IQ tests and IQ points is that they get renormalized as people’s results change. So an average IQ today would, 100 years ago, have made you look like a genius, like you could be president of the United States.
We don’t appreciate this because we’re comparing ourselves to the other people who are alive. Compare ourselves to our parents’ generation: Those of us who grew up with smartphones in our pockets have gathered a lot more information over the course of our lives. And whether or not a student retains it, if it’s at their fingertips, I don’t see it as entirely critical that they retain every single thing. It’s more important to me that they are studying and discovering what they find most interesting.
If it’s quantum physics, if it’s diplomacy, if it’s building new artificial intelligence, if it’s combating climate change, I want to make sure—whether they retain it or not—that the information is at as many students’ fingertips as possible, to make sure that when they grow up and they go into those fields, they are as good as they can be. And I really do believe that it’s just objectively factual: With this tremendous access to information, this next generation is going to be so much smarter than us and our parents.
DUVAL: I would give a different perspective. What AI does is put information in the palm of your hand in a way that most of us my age probably didn’t have. So does that make you smarter? The question is, what is intelligence? That’s the real question at the heart of what we’re talking about, right? Because information depends on how you use it. If I’m a young person, from an educational perspective, and I’ve been bullied online, and I’m taking that personally, that may not be healthy. And you do lose skills. What if you don’t know how to go into a library and look up information? I’m not saying that way is the best way, but I’m saying that you have to balance this.
I think that AI and technology are going to change the way we learn, and they’re going to change the classroom. AI is most likely going to be used to grade papers and analyze tests. But as I said, technology is meant to support human behavior, not replace it. You still must have humans there to provide the support, guidance, and group collaboration that are critically important for young people learning how to engage and interact. This is going to be paramount.
ALGHUSSEIN: That is a perfect answer to lead into a question that we received from Senator [John] Barrasso’s office [R-Wyoming]: Could you please speak to, in your experience, the influence of AI in creative markets? How autonomous are AI art, essays, etc.?
SOLIMAN: I’m going to respond to this as an engineer first, before responding as a policy professional. Think about generative AI right now as a statistical representation of whatever written content we have online, or other types of content. It’s not really generative. It doesn’t really have consciousness. It doesn’t really come up with new ideas. It mostly reproduces the highest-probability answers that exist on websites. I believe The Washington Post had a report about the sources of information used as part of ChatGPT.
I think when you’re using those apps, you should be very aware of what kind of information they ingest, what kind of sources they import into the system, just to get an idea of the bias that exists within the text, the responses, the local understandings. It’s still a large language model, still machine learning, still a representation of online content. We have not really moved beyond that.
I think the reason there’s a lot of buzz around it is that we are able to use it in daily life and in commercial spaces. However, we have been working on this as engineers for the last 15 years. The difference is that the parameter counts are bigger and the integration is clearly at scale. Still, we have not yet advanced to AI with consciousness.
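Soliman’s description of generative AI as a statistical representation can be illustrated with a toy next-token step. The tokens and scores below are invented; a real model computes logits over a vocabulary of tens of thousands of tokens, but the principle is the same: probabilities in, a weighted draw out.

```python
import math
import random

# Invented logits: the model's raw scores for candidate next tokens.
logits = {"peace": 2.1, "conflict": 1.7, "trade": 0.3, "banana": -3.0}

# Softmax: probabilities proportional to exp(logit).
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# The output is a weighted draw from training-derived statistics;
# no understanding or consciousness is involved.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_token)
```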
ALGHUSSEIN: We recognize that this is a very broad conversation and that AI touches on national security issues, on privacy issues, on environmental issues, on creative liberties. I’m going to pivot back to environmental issues that can directly impact wars.
As we know, climate change has created a competition for scarce resources. A 2019 study by researchers at the University of Massachusetts determined that training certain popular large AI models can produce around 626,000 pounds of carbon dioxide, the equivalent of 300 round-trip flights between New York and San Francisco. In addition, the World Economic Forum predicted that by 2050, a total of 120 million metric tons of e-waste will have been generated. In 2021, Google’s data centers in the US consumed 12.7 billion liters of fresh water for onsite cooling, the majority of it potable. Is it possible to predict the kinds of environmental impacts that AI could bring to the Middle East in the near future, especially in regions predisposed to drought and without proper e-waste disposal infrastructure already established?
DUVAL: I would ask you all to do a little research on what’s happening in Saudi Arabia around the NEOM project. Ninety-five percent of the natural environment around the site is going to be preserved. It’s going to be carless, a new civilization. The development is called The Line, and they’re building vertically. It’s a way to reimagine development in urban environments.
One of the things that was very impressive was the environmental component of it, and how they’re using the natural resources around the site. And they’re building it in the middle of a desert. So I think it could be a model.
ALGHUSSEIN: Jerome, I have a follow-up question for you. In 2017, Crown Prince Mohammed bin Salman of Saudi Arabia announced plans to build NEOM as a 170-kilometer-long city entirely powered by renewable energy, made efficient through AI, and producing zero carbon emissions. This hyperconnected future community will use an estimated 90 percent of available data to enhance infrastructure capabilities, exceeding the 1 percent commonly used in smart cities. The UAE Ministry of Economy stated that the global smart-cities market is projected to reach about $873.7 billion by 2026.
First, is the utilization of AI to this extent feasible? Second, if successful, what would this mean for the success and development of AI, both regionally and globally?
DUVAL: On the first part of your question: I don’t know. But on the second part, I think that AI is allowing them to advance beyond what is possible with the way we traditionally do engineering and urban development, coming at it from an economic development perspective. In my work, one of the things I found very intriguing is that they’re actually using AI on this project to help with design. They’re able to predict what the impact of a design would be before they even build anything.
They’re leveraging AI in a very different way, to help create a more sustainable community. There will be no cars there. Everything will be within walking distance, from your hospital to your school to your grocery store. Imagine living in a space where you’re within 100 yards of everything you need to access, and how that may change your quality of life. I know that in the DC region, the New York region, or the Atlanta region, you spend a minimum of four hours in your car. Four hours a day of your life is spent in a car, depending on where you live. I think it will serve as a model to reimagine the way we think about urban development, especially in very condensed spaces, like where I live, in Manhattan.
ALGHUSSEIN: This audience is extremely influential when it comes to policymaking, the nature of decisions, and the future of the world. Because of that, people who work in Congress and in government are often targeted with misinformation, disinformation, and propaganda. What advice and tips can you give these current and future leaders, who represent a multitude of countries, on how to decipher all of that, so that they can make decisions based on accurate, credible, and timely information?
SEXTON: My advice, and I will admit that I do not take this advice perfectly, is that if you see something upsetting on social media, take a breath. Social media algorithms recognize that the content that is most likely to get you to engage is the stuff that makes you upset. Walk away. You can wait 12 hours, unless someone’s knocking on your door demanding your comment on it. I think it’s oftentimes better to let the news settle for a bit. Make sure all the facts are straight before diving into things.
DUVAL: President Reagan popularized the phrase “trust but verify.” Jerome DuVal will coin the phrase “don’t trust, and verify.” Don’t trust what you read, what you see, what you hear. You must verify it with multiple trusted sources.
SOLIMAN: I would say, keep yourself updated on industry standards. Right now we have this new trend of AI-generated images; whether your phone is from Apple or Google, make sure that you’re able to understand these sorts of features. The same applies to audio, and the same to video. Deep fakes are coming, and they’re going to be much more brutal. I know that this is a well-educated audience, able to recognize text disinformation and misinformation. But the next level of this will be much more challenging, and it requires constant education and awareness of industry standards.
ALGHUSSEIN: As we just saw, our own president recently repeated misinformation and disinformation. None of us are immune. I want you all to be hypervigilant, because you are the perfect targets for propaganda. Every single one of you has to exercise extreme critical thinking: question everything, trust nothing, and verify with multiple sources. I would impress that upon you as a former congressional staffer and a former White House appointee myself: You must be aware that, of all the people in the world, you are the first target for misinformation.
(Banner Image: World Economic Forum / Faruk Pinjo)