
Key takeaways from Amnah Ajmal, Executive VP of Market Development, Mastercard at Digital Future Forum 2024

Discover how ethical AI development can drive job creation, societal inclusion, and sustainable progress while emphasising transparency, fairness, and a human-centred approach, as outlined in the keynote address by Amnah Ajmal, Executive VP of Market Development, Eastern Europe, Middle East and Africa, Mastercard.

Amnah Ajmal, Executive VP of Market Development, Eastern Europe, Middle East and Africa, Mastercard

In November 2022, the technology world as we know it transformed. OpenAI launched its generative AI (genAI) model, ChatGPT. While artificial intelligence has been around for decades, this new model democratised it, making it easy for anyone to use AI.

The result: ChatGPT gained one million users within five days of its launch. For comparison, Instagram took 2.5 months to reach a million downloads, while Netflix waited 3.5 years to hit that number. Today, genAI and other AI models are reshaping industries, societies, and economies worldwide.

Technology today is evolving faster than ever, and it is imperative to understand the different facets of AI adoption, including the leadership required to drive innovation, while keeping in view the psychological, social, and ethical considerations that must be addressed.

This was exactly what Amnah Ajmal, Executive Vice President of Market Development, Eastern Europe, Middle East and Africa, Mastercard, spoke of in the opening keynote of the second edition of edge/ Digital Future Forum – Bringing in an AI frontier.

Here are some of the key takeaways from Ajmal’s session:

UAE’s Position in the Global AI Landscape

      The UAE is pioneering in AI by being one of the first countries to appoint an AI minister, showcasing its commitment to leading AI innovation. This strategic move aims to skill millions in AI, positioning the UAE as an AI research and development hub. The UAE’s AI strategy includes transforming public services, healthcare, education, and infrastructure, attracting global talent and investment. This leadership sets a competitive precedent and serves as a model for integrating AI responsibly and effectively into society.

      Understanding Negativity Bias

      Negativity bias is a psychological phenomenon in which humans focus more on negative experiences and stimuli than on positive ones. This bias is deeply rooted in our evolutionary past, when early humans needed to be highly attuned to potential dangers for survival.

      The brain’s response to harmful stimuli is more intense and prolonged, increasing awareness of threats and adverse events. Scientific research supports this, showing that negative images and stories activate more brain areas than positive ones.

      This heightened sensitivity to negativity can impact decision-making, relationships, and well-being. However, the concept of neuroplasticity offers hope. Neuroplasticity is the brain’s ability to reorganise and form new neural connections throughout life. Individuals can rewire their brains to focus on positive experiences and develop a growth mindset through deliberate practices such as mindfulness, gratitude exercises, and cognitive-behavioural techniques.

      This process involves consistently reinforcing positive thoughts and behaviours, which can eventually reduce the impact of negativity bias. For leaders, understanding and mitigating negativity bias is crucial for fostering a positive organisational culture and promoting innovation.

      By emphasising strengths, celebrating successes, and encouraging constructive feedback, leaders can create an environment where employees feel valued and motivated, enhancing overall productivity and well-being.


      Historical resistance to technological change

      History shows that resistance to technological change is a natural human reaction. The transition from horses to automobiles faced significant opposition due to safety concerns and economic disruption. However, the advantages of automobiles eventually became undeniable, leading to new industries and job creation.

This teaches us that such resistance can be overcome by highlighting the long-term benefits of new technologies. Leaders must acknowledge these fears while showcasing the positive impacts of technological advancements. Resistance to automobiles, for instance, faded as cars proved more efficient and reliable than horse-drawn carriages, creating new industries and job opportunities in car manufacturing, repair services, and fuel production. This historical perspective can help leaders today manage resistance to AI and other emerging technologies by emphasising their potential benefits and opportunities for growth.

      Innovation through leadership: an automotive industry example

      In the 1940s, a visionary automotive industry leader promoted planned obsolescence, encouraging continuous innovation. By dividing the company into teams tasked with creating new models, the leader rewired the organisational mindset to embrace change and growth. This approach kept the company ahead of the curve and fostered a constant improvement culture. This example illustrates the importance of leadership in driving organisational change and promoting a culture that values creativity and innovation.

      Employees who were initially focused on perfecting a single model had to adapt to a culture of continuous improvement and innovation, requiring not only technical skills but also a willingness to embrace change and think creatively.

The Asch conformity experiment and its implications

      The Asch conformity experiment, conducted by Solomon Asch in the 1950s, is a classic study in social psychology demonstrating conformity’s power in group settings. In the experiment, participants were shown a line and then asked to select which of three other lines matched the original line in length. However, among the participants, there were confederates (people planted by the experimenter) who were instructed to give incorrect answers.

The key finding was that when the confederates unanimously gave the wrong answer, the actual participants often conformed to this incorrect majority opinion, even though their own perception told them otherwise. The experiment revealed that social pressure can lead individuals to conform to group opinions even when they know those opinions are wrong.

      This significant phenomenon highlights how group dynamics can influence individual behaviour and decision-making. This finding is particularly relevant in the context of technology and AI.

      For instance, if most people believe that AI will lead to job displacement, even those who understand that AI could create new opportunities might conform to the negative majority view. This can hinder the adoption and positive development of AI technologies.

      As leaders and educators, it is crucial to be aware of this conformity bias and actively promote independent thinking and evidence-based decision-making. By fostering an environment where diverse opinions are valued and critical thinking is encouraged, we can mitigate the adverse effects of conformity and support more informed and balanced discussions about technology and its impact on society.

      The importance of optimism in technology leadership

      Technology leaders must have an optimistic view of the future to inspire their teams and drive innovation. Optimism allows leaders to envision a positive future where technology can solve complex problems and improve lives. For example, AI can provide remote access to quality healthcare and personalised education for children with developmental challenges.

      By highlighting these positive impacts, leaders can counteract fears and encourage the adoption of new technologies, fostering a culture of hope and resilience. Optimism also helps to overcome resistance to change by providing a balanced view of the potential benefits and opportunities that technology can bring.

      For instance, AI-driven solutions such as telemedicine apps and diagnostic tools can significantly improve access to medical care in underserved areas, demonstrating how technology can bridge gaps and enhance quality of life. Similarly, in education, AI can provide personalised learning experiences for students with diverse needs, helping to bridge educational gaps and promote equal opportunities.

      The role of human creativity and collaboration in technological advancements

      Human creativity and collaboration are fundamental to technological advancements, serving as the driving forces behind innovation and problem-solving. While AI and other technologies can enhance our capabilities, the unique human traits of creativity and collaboration ultimately shape and direct these advancements. Human ingenuity has been at the core of major technological breakthroughs.

      For example, sustainable agriculture practices, such as crop rotation, were developed through a deep understanding of natural processes and farmer collaboration. These practices have improved soil health and agricultural productivity, demonstrating how creative solutions can address complex environmental challenges.

      Similarly, the development of renewable energy technologies, like solar panels and wind turbines, has been driven by human creativity and a collective effort to find sustainable alternatives to fossil fuels. These innovations require multidisciplinary collaboration, bringing together engineering, environmental science, and policy-making experts to create viable solutions for a greener future.

      In AI, human creativity is essential for designing algorithms and models that can solve specific problems. For instance, AI-driven waste management systems can predict waste generation patterns and optimise collection routes, reducing environmental impact and improving efficiency. This requires collaboration between data scientists, urban planners, and waste management professionals to ensure the technology meets real-world needs.

      Furthermore, human creativity and collaboration are crucial for addressing ethical and social challenges associated with technological advancements. As AI becomes more integrated into various aspects of society, it is essential to consider the ethical implications and ensure that technology is used responsibly.

      This involves diverse stakeholders, including ethicists, policymakers, and community representatives, working together to establish guidelines and best practices. A key aspect of fostering creativity and collaboration is creating an environment that encourages open communication, diversity of thought, and interdisciplinary cooperation.

      By valuing different perspectives and promoting teamwork, organisations can harness their members’ collective intelligence and creativity to drive innovation. Leaders play a critical role in cultivating this environment by setting a clear vision, providing resources and support, and recognising and rewarding creative contributions.


      Addressing algorithmic bias in AI systems

      Algorithmic bias occurs when AI systems produce biased or unfair outcomes due to the data used to train them or how the algorithms are designed. This bias can result from various factors, including historical inequalities in the training data, flawed data collection processes, and the subjective decisions made during algorithm development.

      Algorithmic bias can manifest in several ways, such as reinforcing stereotypes, discriminating against certain groups, or providing unequal access to opportunities and services. For example, an AI hiring tool trained on biased data might favour specific demographics over others, perpetuating existing biases in the job market.

      Similarly, facial recognition systems have been shown to have higher error rates for people with darker skin tones, leading to potential discrimination in law enforcement and other applications. Addressing algorithmic bias requires a multi-faceted approach that includes transparency, accountability, and ongoing monitoring.

      One crucial step is to ensure transparency in AI development. This involves documenting the data sources, methodologies, and decision-making processes used to create AI models. By making this information accessible, stakeholders can better understand the potential sources of bias and work towards mitigating them. Another essential strategy is implementing algorithmic hygiene, which involves regularly testing and auditing AI systems for biases.

      This process includes analysing the outputs of AI models to identify and correct any discriminatory patterns. It also involves updating training data and algorithms as new information and insights become available, ensuring that AI systems evolve more fairly and accurately over time.
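To make this kind of output audit concrete, here is a minimal illustrative sketch in Python. It assumes a simple tabular log of model decisions with a recorded group attribute, and it flags cases using the widely cited four-fifths rule of thumb; the column names, data, and threshold are assumptions for the example rather than a toolkit referenced in the keynote.

```python
# Illustrative "algorithmic hygiene" check: compare a model's selection rates
# across demographic groups. Column names, data, and the 0.8 threshold (the
# common four-fifths rule of thumb) are assumptions for this sketch.
import pandas as pd


def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes (e.g. approvals) per group."""
    return df.groupby(group_col)[outcome_col].mean()


def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group rate divided by highest group rate; 1.0 means parity."""
    return rates.min() / rates.max()


if __name__ == "__main__":
    # Hypothetical audit log: one row per automated decision.
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0,   0,   1],
    })
    rates = selection_rates(decisions, "group", "approved")
    ratio = disparate_impact_ratio(rates)
    print(rates)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # flag for human review, retraining, or data fixes
        print("Potential bias detected - escalate for review.")
```

In practice, a check like this would be rerun whenever training data or algorithms are updated, so that the audit evolves alongside the system, as described above.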

      Collaboration between various stakeholders is essential to effectively addressing algorithmic bias. The public and private sectors, academic institutions, and advocacy groups must collaborate to develop and enforce standards for fairness and accountability in AI.

This includes creating regulatory frameworks that mandate the ethical use of AI and incentivise organisations to prioritise fairness in their AI systems. In practice, algorithmic hygiene means thoroughly examining AI outputs to ensure they are free from biases before deployment.

       This requires significant effort and collaboration between AI developers, ethicists, and regulators to establish best practices and guidelines for responsible AI use. Additionally, promoting diversity within AI development teams can help mitigate bias.

      A diverse team brings a variety of perspectives and experiences, which can help identify and address potential biases that a more homogenous group might overlook. Ensuring that AI development includes voices from different backgrounds and communities can lead to more inclusive and equitable AI systems.

      Guiding principles for AI developers

      The guiding principle for AI developers should be prioritising human well-being and ethical considerations in all aspects of AI development and deployment. This principle, often called “human-centred AI,” ensures that technology serves humanity’s best interests and aligns with societal values and norms. At the core of this principle is the recognition that AI technologies profoundly impact individuals, communities, and society. Therefore, AI developers must take responsibility for the consequences of their creations and strive to minimise potential harms while maximising benefits.

      One of the critical aspects of human-centred AI is transparency. AI developers should be open about how their algorithms work, the data they use, and their models’ potential biases and limitations. This transparency builds trust and allows users and stakeholders to understand and scrutinise AI systems, ensuring they are used ethically and responsibly. Another critical aspect is fairness.

      AI systems should be designed and tested to avoid perpetuating or amplifying existing biases and inequalities. This involves using diverse and representative datasets, regularly auditing AI outputs for fairness, and seeking to mitigate any identified biases. Ensuring fairness also means involving diverse perspectives in AI development, including voices from different demographics, backgrounds, and disciplines.

      Accountability is also crucial. AI developers should be accountable for the outcomes of their technologies and take proactive steps to address any negative impacts. This includes setting up mechanisms for feedback and redress, where users can report issues and seek resolutions. It also involves adhering to ethical guidelines and standards at the organisational and industry levels. Privacy is another fundamental consideration. AI developers must ensure their systems protect users’ data and respect privacy rights.

      This involves implementing robust data security measures, minimising data collection to only what is necessary, and being transparent about how data is used and stored. Additionally, AI systems should be designed to enhance human capabilities and empower users. Rather than replacing humans, AI should augment human skills, providing tools and insights that help people make better decisions, solve complex problems, and improve their quality of life.
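As an illustration of what data minimisation can look like in practice, the sketch below keeps only the fields a hypothetical model needs and replaces the direct identifier with a salted one-way hash before storage. The field names and salt handling are assumptions made for the example, not a practice prescribed in the keynote.

```python
# Illustrative data minimisation step before storing records for AI use:
# keep only the fields the model needs and pseudonymise the identifier.
import hashlib
import os

REQUIRED_FIELDS = {"age_band", "region", "product_usage"}  # assumed model inputs


def pseudonymise(value: str, salt: bytes) -> str:
    """One-way hash so records can be linked internally but not read back."""
    return hashlib.sha256(salt + value.encode("utf-8")).hexdigest()


def minimise(record: dict, salt: bytes) -> dict:
    """Drop everything the model does not need; replace the raw user id."""
    slim = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
    slim["user_ref"] = pseudonymise(record["user_id"], salt)
    return slim


if __name__ == "__main__":
    salt = os.urandom(16)  # in a real system, managed by a secrets store
    raw = {
        "user_id": "u-12345",
        "full_name": "Jane Example",   # not needed by the model: dropped
        "email": "jane@example.com",   # not needed by the model: dropped
        "age_band": "30-39",
        "region": "EEMEA",
        "product_usage": "high",
    }
    print(minimise(raw, salt))
```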

      Lastly, AI developers should consider the long-term impacts of their technologies on society and the environment. This involves thinking beyond immediate commercial gains and considering the broader implications of AI deployment. Sustainable and ethical AI development requires a commitment to the well-being of current and future generations.

      In conclusion, the guiding principle for AI developers should be to prioritise human well-being, ensuring that AI technologies are transparent, fair, accountable, privacy-respecting, empowering, and sustainable. By adhering to these principles, AI developers can create technologies that benefit society, promote trust, and contribute to a positive and inclusive future.

      AI’s potential for job creation and societal inclusion

      AI has the potential to significantly contribute to job creation and societal inclusion by driving innovation, improving efficiency, and creating new economic opportunities across various sectors. While there are concerns about AI displacing specific jobs, it is essential to recognise that AI can generate new employment types and enhance existing roles. One of the primary ways AI contributes to job creation is by developing new industries and services. For example, the rise of AI-driven technologies has led to the emergence of roles such as data scientists, AI specialists, and machine learning engineers.

      These positions require specialised skills and knowledge, creating demand for training and education programs that equip individuals with the necessary expertise. Additionally, AI can enhance productivity and efficiency in traditional industries, leading to economic growth and new jobs. For instance, in manufacturing, AI-powered automation can streamline production processes, reducing costs and increasing output. This can result in expanding manufacturing operations and creating more jobs in maintenance, quality control, and logistics areas.

      AI can improve diagnostic accuracy and patient care in healthcare, enabling medical professionals to focus on more complex and value-added tasks. AI-powered tools can assist doctors in analysing medical images, predicting patient outcomes, and personalising treatment plans.

      This enhances the quality of care and creates opportunities for new roles in healthcare technology and data analysis. Furthermore, AI can play a crucial role in societal inclusion by providing access to services and opportunities previously unavailable to specific groups.

      For example, AI-driven educational tools can offer personalised learning experiences for students with diverse needs, helping to bridge academic gaps and promote equal opportunities. In rural and underserved areas, AI-powered telemedicine can provide access to healthcare services, allowing individuals to receive medical consultations and advice remotely.

This can significantly improve health outcomes and reduce disparities in healthcare access.

AI can also support inclusion in the workplace by enabling more flexible and accessible working conditions. AI-driven tools can assist individuals with disabilities in performing their jobs more effectively, whether through speech recognition software, automated task assistance, or adaptive technologies. By making workplaces more inclusive, AI can help tap into a broader talent pool and promote diversity.

To maximise the positive impact of AI on job creation and societal inclusion, it is essential to address the challenges associated with AI adoption.

      This includes providing education and training programs to help workers transition to new roles, ensuring fair and ethical AI practices, and fostering collaboration between the public and private sectors. Policymakers, educators, and industry leaders must work together to create an environment where AI can thrive while minimising potential negative impacts.

      In conclusion, AI has the potential to drive job creation and promote societal inclusion by fostering innovation, enhancing productivity, and providing access to new opportunities. By addressing the challenges and working collaboratively, we can harness the power of AI to create a more inclusive and prosperous future for all.