Challenges and Pitfalls of Artificial Intelligence

1. Introduction

 

In the last few years, artificial intelligence (AI) has attracted considerable interest. AI has been recognized as a potential game changer in many fields and is predicted to profoundly affect the future of humanity in many ways. Several large corporations have invested heavily in AI and related technologies. Start-ups dedicated to AI-related hardware and software have been created by the dozen, and commercial products marketed as AI have proliferated. Governments have funded initiatives dedicated to the development of AI as a promising group of technologies. The interest and investment in AI have been described as a second “AI boom” after an earlier one in the 1980s. Unlike the first boom, the current one is expected to lead, among many other things, to significant leaps in AI capabilities. This development raises important questions about the consequences and risks of AI that deserve careful thought. In this vein, the following questions are pertinent: What are the potential risks posed by AI? What are possible scenarios involving AI systems? How might these scenarios play out? The promises, and also the risks, that science and technology pose to humanity have long raised questions of ethics and moral responsibility. Such questions have been the subject of rigorous philosophical analysis throughout the ages, and the question “What is the right or most just action?” is the central concern of an established field of academic inquiry, namely ethics. In this light, it is natural to ask whether there are mistakes, pitfalls, or other concerns regarding AI that might be the subject of ethical analysis. Generally speaking, technologies can be used in the wrong way, and particular technologies can negatively affect individuals or societies. Against this background, what might be the challenges and pitfalls related to AI? (Makridis and Mishra, 2022)(Naudé, 2021)(Babina et al., 2024)(Hirsch-Kreinsen, 2023)(Adekoya et al., 2022)

 

2. Ethical Considerations

 

The implementation of AI systems raises numerous ethical concerns across various fields, including the workplace, education, health care, and law enforcement. It is essential to address the ethical responsibilities of designers, developers, and organizations to ensure fair, transparent, and accountable design processes. The four main challenges to consider are bias and fairness, privacy, accountability, and transparency. AI systems often discriminate against people based on race, gender, or other characteristics. One algorithm classified applicants for certain jobs as riskier simply because they came from the wrong neighborhood. Video analysis systems have been more likely to flag the actions of certain individuals as suspicious or criminal. Even though these inequities were brought to light, biased algorithms are still in use, affecting loan applications, hiring, and criminal sentencing. Sensitivity to how data-driven decisions affect social outcomes requires understanding the biases in real-world datasets and the outcomes of specific AI techniques. Organizations argue that privacy-preserving AI is possible and that researchers must work to design it better. However, commentators question organizations’ sincerity, as many studies show users are unaware of the depth of information online interactions can reveal. AI-generated deepfakes rely on personal data and are likely to threaten the privacy, autonomy, and dignity of public figures as well as private individuals. AI poses a substantial threat to privacy ecosystems, and the demand for privacy after the advent of AI is still poorly understood. AI decision-making raises serious concerns about accountability, transparency, and trust in organizations that use AI for social surveillance and control. Public interest regulation that addresses the power and risks of large AI systems and preserves democracy and public welfare is critically needed. Over-reliance on AI may undermine the ethical, social, legal, and political claims of AI designers, developers, and organizations, and could lead to a dangerous, dystopia-like space. (Ferrer et al., 2021)(Varona and Suárez, 2022)(Heinrichs, 2022)(Peters, 2022)(Gupta et al., 2022)

 

2.1. Bias and Fairness

 

Humans are often affected by biases rooted in their beliefs, personal experiences, societies, or cultures, which are reflected in their decision-making. These biases may distort the conclusions or decisions they reach. Unlike humans, machines and AI algorithms have no biases of their own; biases enter only through the data they are trained on or the way they are designed. When an AI algorithm is trained on biased data, or when the algorithm itself draws biased conclusions from unbiased data, the resulting recommendations will be biased, even if the algorithm was otherwise well designed and intended to inform well-grounded decisions. These biases in AI-generated recommendations can lead to unfairness or inequalities in society, which is what is now described as the bias and fairness challenge. Here, inequalities typically refer to disparities across demographic groups: an action or decision taken by machine intelligence may favor one demographic over another, whether deliberately or inadvertently.

 

A closely related form of the challenge arises when a recommendation made by one party is preferentially biased against another party. Consider job recommendations generated by a company for a pool of applicants: the applicants’ skills, educational backgrounds, and work experiences are not examined directly, but the applicants have been scored algorithmically by the company’s AI, which advises on the basis of those scores. The reasonable expectation is that applicants of both genders are considered fairly. In practice, incompatibilities, discrepancies, or other undesired outcomes can occur, such as one demographic group receiving fewer job recommendations or lower recommended pay rates than another, as a consequence of actions a party takes based on AI-generated recommendations. Incompatibilities are circumstances in which the process appears to meet the relevant standards while the output results do not, and undesired outcomes arise when algorithmically generated recommendations, actions, or decisions unintentionally over-represent one demographic group. The result is a loss of diversity, as potentially good candidates are wrongly excluded for being too similar to others who were shortlisted. (Capraro et al.2024)(Ferrara, 2023)(Pappalardo et al.2024)(Ferrara, 2023)(Wach et al.2023)(Osasona et al.2024)
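To make the fairness concern concrete, the following minimal sketch (with an entirely hypothetical applicant table and column names) computes the selection rate of an algorithmic shortlist per gender and reports two common disparity measures, the demographic parity gap and the disparate impact ratio; it illustrates the kind of check discussed above rather than any method from the cited works.

```python
# Minimal sketch (hypothetical data and column names): measuring demographic
# parity in algorithmically generated shortlisting recommendations.
import pandas as pd

# Hypothetical applicant records: a protected attribute and the AI's decision.
applicants = pd.DataFrame({
    "gender":      ["F", "M", "F", "M", "M", "F", "M", "F", "M", "M"],
    "shortlisted": [0,    1,   0,   1,   1,   1,   0,   0,   1,   1],
})

# Selection rate per group: P(shortlisted = 1 | group).
rates = applicants.groupby("gender")["shortlisted"].mean()

# Demographic parity gap: difference between the highest and lowest selection rate.
parity_gap = rates.max() - rates.min()

# Disparate impact ratio (the "four-fifths rule" often uses a 0.8 threshold).
impact_ratio = rates.min() / rates.max()

print(rates)
print(f"Demographic parity gap: {parity_gap:.2f}")
print(f"Disparate impact ratio: {impact_ratio:.2f}")
```

A gap near zero, or a ratio near one, indicates that the recommendations treat the groups similarly on this narrow criterion; it does not, by itself, establish that the process is fair.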

 

2.2. Privacy

 

The increasing availability of datasets and marked advancements in computing power and speed have propelled a new wave of machine learning applications, in particular deep learning. These applications hold tremendous potential, affecting almost all aspects of life. However, because they are socio-technical systems incorporating both technology and society, it is important to investigate their broader implications: what may go wrong with them? A crucial empirical component is the design and use of algorithms with the potential for significant impact on society. Those algorithms are complicated and not easily interpretable, and different stakeholder groups can apply diverse criteria when assessing algorithmic workings and outputs. These assessments can be conducted in ways, and with consequences, that differ from traditional technological domains. In such complex situations, there remains a danger of disregarding essential ethical considerations.

 

Training and test datasets for machine learning algorithms often contain personal data. If that data is not adequately anonymized and can be linked back to individuals or organizations, privacy risks arise. Inquiries into potential privacy risks are important at the level of both individuals and groups, because this permits analyses of whether inferences made using machine learning algorithms are ethical and fair. Such analyses require ethical guidelines with respect to privacy, and they need to consider various types of data, from publicly available datasets to datasets that effectively shape freedom of choice in emerging democratic societies. An unforeseen consequence of establishing quantitative guidelines for the privacy-preserving use of datasets may be the discovery of datasets with ethical problems.
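As a minimal illustration of the anonymization concern (hypothetical columns and records), the sketch below checks k-anonymity: whether every combination of quasi-identifiers such as zip code and age band is shared by at least k records, since small groups are the ones most easily re-linked to individuals.

```python
# Minimal sketch (hypothetical column names): checking k-anonymity over
# quasi-identifiers that could be used to re-identify individuals.
import pandas as pd

records = pd.DataFrame({
    "zip_code": ["12345", "12345", "12345", "54321", "54321"],
    "age_band": ["30-39", "30-39", "30-39", "40-49", "40-49"],
    "diagnosis": ["A", "B", "A", "C", "C"],  # sensitive attribute
})

quasi_identifiers = ["zip_code", "age_band"]

def k_anonymity(df: pd.DataFrame, quasi: list[str]) -> int:
    """Return the smallest group size over all quasi-identifier combinations."""
    return int(df.groupby(quasi).size().min())

k = k_anonymity(records, quasi_identifiers)
print(f"Dataset is {k}-anonymous over {quasi_identifiers}")
# A small k (e.g., k = 1 or 2) signals a high re-identification risk.
```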

 

Other potential privacy risks can arise when the designer of a machine learning algorithm knows more about the training data than the controller who wants to use that algorithm. This is the case, for instance, when a company that has collected personal training data sells an algorithm trained on that data to another company. The trained algorithm may enable the latter company to make inferences about individuals in the former company’s dataset. Inferences about age or income, for example, could result in targeted advertisements for men’s shampoo or girls’ clothing. An ethical analysis may assess whether such inferences are acceptable. If not, this necessitates looking for changes to the algorithm’s design or, perhaps, to how development responsibilities are distributed. (Whang et al., 2023)(Ding et al.2021)(Kaissis et al.2020)
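A minimal sketch of this situation follows (hypothetical features and labels): a model trained on one company's personal data is applied by a second company to infer a sensitive attribute about new individuals, showing that the inference capability travels with the trained model even when the raw data does not.

```python
# Minimal sketch (hypothetical features): a model trained on Company A's
# personal data is handed to Company B, which uses it to infer a sensitive
# attribute (income bracket) from innocuous features of its own customers.
from sklearn.linear_model import LogisticRegression

# Company A: (purchase_count, avg_basket_value) -> high-income indicator.
X_train = [[2, 15.0], [30, 80.0], [5, 20.0], [40, 120.0], [3, 10.0], [25, 90.0]]
y_train = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X_train, y_train)

# Company B applies the purchased model to new individuals it knows little about.
new_customers = [[28, 95.0], [4, 12.0]]
print(model.predict(new_customers))        # inferred income bracket
print(model.predict_proba(new_customers))  # confidence of the inference
```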

 

2.3. Accountability and Transparency

 

A critical safety, security, and ethical concern surrounding the use of computational systems is the accountability of computation. The initial question is what accountability means. For computational systems, three criteria can be used to assess accountability: (1) the system can be held accountable, or take accountability, for harmful outcomes, (2) it is possible to determine when this requirement is not fulfilled, and (3) there is a system of repercussions. In most computational systems currently in use, these criteria are often absent. As a result, even when harmful incidents occur, individuals are often not, or insufficiently, held accountable, nor are there sufficient mechanisms in place to avert a recurrence. This is detrimental because accountability is a core element of human rights frameworks, in which individuals are presumed harmless until proven otherwise yet remain answerable for their actions.

 

Because this accountability is absent, such systems cannot be trusted, since those affected are left defenseless against harm. Analyzing algorithms with respect to this quality shows that most AI and ML systems are not accountable. In general, machines assign actions to themselves where power is at stake, and people are expected to absorb the consequences; accountability becomes complex and substantial when we try to reverse this arrangement. Nevertheless, processes in which humans could be held accountable do exist. For instance, in the case of accidents involving self-driving cars, it could become possible to hold car manufacturers and/or big technology firms accountable for negligence in their safety evaluations.

 

However, humans often lack the understanding of these systems that accountability requires. Such systems are described as being beyond understanding: their mechanisms for assigning decisions or actions are too complex for humans to grasp. They cannot be trusted for two reasons: (1) there is complete uncertainty about the actions the system will output in specific situations, and (2) for this reason, there is no safety guarantee regarding the system's actions. Transparency is, in this regard, essentially the opposite: the requirement that computations remain understandable to the whole (potentially) affected population. By this standard, most systems that use AI and ML technology fail to be transparent and are therefore unacceptable. There is ample evidence of powerful decisions being contracted out to computational systems without any guarantee that those systems can be understood or audited.

 

Computational systems can be of great importance to humanity, for instance in science, finance, health, and climate change mitigation. Nevertheless, such systems are typically not comprehensible to the whole potentially affected population, but only to individuals with the relevant knowledge, experience, and qualifications. (Novelli et al., 2024)(Busuioc, 2021)(Santoni de Sio and Mecacci, 2021)(Asatiani et al.2021)(Hutchinson et al.2021)

 

3. Technical Challenges

 

The recent developments in artificial intelligence (AI), especially through the popularization of machine learning technologies, have allowed for better performance of AI systems. However, research in AI has also revealed many new challenges and pitfalls. Unlike “hard” AI, which operates according to mathematical and statistical rules with well-defined parameters, “soft” AI relies on more ambiguous statistical heuristics. The observed performance of AI systems may still be greater than the sum of their parts, but there are no guarantees that the sum is even good enough to perform simple tasks. In this area of research, many issues remain unsolved and deserve more attention. There are also concerns about the negative impacts of faulty AI systems, as societies may not be prepared to deal with them. Many new pitfalls are likely to emerge. Even simple or transparent AI systems may operate in ways that are difficult to understand or control. Factors beyond data and algorithms can drastically change their performance. The possibilities and limits of AI have yet to be explored.

 

The performance of AI systems depends heavily on access to good-quality data that is relevant to the specific application. Researchers in some domains are convinced that the results will still be good enough even when simple off-the-shelf approaches are applied to the most basic data. This perspective is very optimistic and does not always match the reality of specific applications. It is also possible to introduce bias or to amplify it. Given that the underlying data may be noisy and ill-defined, this pitfall is easily overlooked rather than corrected.

 

To interpret and analyze the workings of AI systems, it is crucial to be able to quantify the contribution of individual input features to the output score of a considered model. This can be achieved by computing a set of salience scores, which are real-valued numbers associated with input features. It is essential to understand the predictions of AI systems and to be able to monitor their inner workings. This can help take specific actions when a prediction is unexpected, such as retraining the system on new, representative examples. It is also beneficial to visualize the workings of AI systems, as they may operate in ways one would never imagine. Such systems may reveal novel relations and patterns hidden within the data.
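One simple, model-agnostic way to obtain such salience scores is permutation importance, sketched below with a hypothetical model and synthetic data (an illustration of the general idea, not a method taken from the cited works): shuffling a feature breaks its relationship with the output, and the resulting drop in accuracy serves as that feature's salience score.

```python
# Minimal sketch: permutation importance as a simple, model-agnostic salience score.
# The feature whose shuffling hurts accuracy most contributes most to the output.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)

for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # break the feature-label link
    drop = baseline - model.score(X_perm, y)
    print(f"feature {j}: salience (accuracy drop) = {drop:.3f}")
```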

 

Many AI technologies are good enough for some applications but cannot be relied on when the stakes are high. Developing and using AI technologies is a comprehensive undertaking that must take many factors into account, which helps explain their pitfalls and their inapplicability to many simple but widespread tasks. AI systems perform optimally only within certain constraints on the available data, the domain of the application, and the computing resources employed. (Brynjolfsson, 2023)(Desai et al.2023)(Raji et al.2022)(Sarker, 2022)(Fügener et al.2021)(Schwartz et al.2022)

 

3.1. Data Quality and Bias

 

Artificial intelligence (AI) has made remarkable advances in recent years, especially in natural language processing and image recognition. However, conventional wisdom holds that AI depends on sufficient training data. Such data must be relevant to the application for which the AI is learning its functions, and it must include a representative sample of the variations that users will encounter; both precision and recall must be high. For example, a commercially available face and eye detection model mistakenly flagged a professor’s photo as a potential burglar: in roughly one out of a hundred detections it missed the face as a whole and picked up only isolated facial elements. Undersampling of some widespread variation, such as eyeglasses, creates exactly these concerns.

 

Broadly speaking, there are two kinds of data quality issues. First, there can be problems with the samples themselves, as in the undersampling example above, typically caused by a flaw in the sampling design. For text, it can even be caused by the fact that, say, product reviews do not actually reveal some attribute that a data scientist believes they do. Undersampling of the studied population can also lead to biases being underestimated or effect sizes being inflated. The second type of data quality issue is a bias in how the training data represents variability: the model generalizes poorly because a biased prior fails to capture the space of possible real-world inputs. A good example is questionable AI in academic admissions, reputed for assuming that certain applicants are not “well-rounded.” Inquiries clarified that a nominally race-neutral algorithm, learning admissions from prior applications, penalized certain identities whose applications did not highlight sports, talent, or personality. Thus, the type of admission the technology permits in turn biases the input data.
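As a minimal sketch of the undersampling issue (hypothetical group labels and proportions), the example below compares subgroup shares in a training sample against a reference population and flags groups that are badly under-represented, the kind of gap that produced the eyeglasses failure described earlier.

```python
# Minimal sketch (hypothetical group labels): flagging under-represented
# subgroups in a training sample relative to a reference population.
from collections import Counter

population_share = {"with_glasses": 0.35, "without_glasses": 0.65}
training_sample = (["with_glasses"] * 20) + (["without_glasses"] * 480)

counts = Counter(training_sample)
total = sum(counts.values())

for group, target in population_share.items():
    observed = counts.get(group, 0) / total
    ratio = observed / target
    status = "UNDER-REPRESENTED" if ratio < 0.5 else "ok"
    print(f"{group}: observed {observed:.2%}, expected {target:.2%} -> {status}")
```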

 

Neither data quality problem is easy to overcome. Broadly speaking, there are two kinds: undersampling, and bias in the prior introduced by the technology itself. The former is at least tractable through a designed exploratory study, which requires drawing a representative sample of the target distribution. The latter, conversely, arises from the very use of AI and is practically insurmountable by design. This creates a genuine uncertainty problem in explaining AI behavior, one that escaped earlier discourse grounded in computability. Problems of AI stemming from these data quality issues can thus be understood as a distinct class of problems. (Liang et al.2022)(Whang et al., 2023)(Zhang and Lu, 2021)(Zha et al.2023)(Alzubaidi et al.2023)(Matsuo et al.2022)

 

3.2. Interpretability and Explainability

 

The development of artificial intelligence (AI) systems has reached a stage where they make decisions in important contexts such as hiring, assessing loan applications, and approving medical diagnoses. As a result, there is a growing need for these systems to make decisions that can be understood by humans, particularly by the people whose lives are most affected by them. However, many AI systems are unable to provide an explanation in terms comprehensible to ordinary users, since their reasoning is often complex and intricate, involving many variables and calculations that cannot simply be boiled down to a single factor. Additionally, the explanations provided, if any, might be insufficient; for instance, a mathematical formula summarizing the internal reasoning of an AI system might still be too complicated, or the AI might fail to justify its decision at all or give inconsistent justifications.

 

Explaining decisions made by sophisticated AI systems is an inherently complex problem. It involves different dimensions, such as: (i) the technical specification of the AI system, including its general architecture and the algorithms or methods implemented internally; (ii) the justification of a specific decision in terms of facts that can be evaluated by the affected person and that relate directly to the decision made and/or to the internal functioning of the AI system; and (iii) the options available to the affected person to contest the decision or to act on the justification given by the AI. It also involves various ethical questions, including: Do affected people have the right to access such an explanation? Should investigations of complex AI systems by external agents during debugging and control work be compulsory, in the same way as mandatory safety checks on airplanes? Could explanations compromise the AI system’s competitiveness, since they make its internal reasoning public and hence easier to replicate?

 

Explainability and interpretability are often used interchangeably in the context of AI, but they refer to distinct concepts. Interpretability is a characteristic of a model, while explainability is a characteristic of a model-user interaction. Explainability presupposes interpretability in the sense that for there to be an explanation of a decision made by the AI system, the AI system must be interpretable. However, it is not the case in the opposite direction: an interpretable model does not necessarily provide explanations that can be adequately understood by users of the AI model.

 

In the context of AI decision-making systems, interpretability refers to the degree to which a human can understand the cause of a decision made by an AI system, while explainability also refers to the quality of the information provided in the model-user communication prior to the decision and in the explanation of the decision taken. Nevertheless, there are also distinctions in the way interpretability and explainability are understood within the field of AI. In this respect, interpretability refers to a class of models that are comprehensible to a human, while explainability refers to the quality of the explanation that a model provides. (Minh et al., 2022)(Saeed and Omlin, 2023)(Angelov et al.2021)(Mohseni et al.2021)(Shin, 2021)
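The distinction can be illustrated in code (hypothetical feature names and synthetic data; a sketch, not a method from the cited works): an inherently interpretable model exposes its reasoning directly through its coefficients, whereas a black-box model requires a separate, post-hoc procedure before any user-facing explanation can be given.

```python
# Minimal sketch: an interpretable model's reasoning can be read off directly;
# explaining a black-box model requires an extra, post-hoc step.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
feature_names = ["income", "debt", "age", "tenure"]  # hypothetical names

# Interpretable: each coefficient states how a feature pushes the decision.
interpretable = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, interpretable.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")

# Black box: its internals are not directly human-readable; any user-facing
# explanation must be produced by a separate procedure (e.g., feature
# attribution computed after the fact).
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
print("black-box prediction for first applicant:", black_box.predict(X[:1])[0])
```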

 

3.3. Robustness and Security

 

In several applications, it is necessary to validate results in the presence of unexpected variations in inputs. For example, image processing algorithms should be largely unaffected by unexpected changes in brightness, resolution, or other features. With artificial vision systems being crucial in cars, drones, and surveillance systems, these systems must be robust. Currently, artificially trained systems are, on the one hand, quite robust to many natural variations in their inputs; on the other hand, they have been demonstrated to be extremely non-robust to small, easily generated noise variations. This holds for classifiers as well as reinforcement learning policies. It is currently not known to what extent these problems are avoided with alternative training mechanisms, such as neuro-evolution or training with constrained models. Real-world Clever Hans effects and adversarial noise in artificial perception are currently hot topics, but pitfalls can also be foreseen on the industrial side. The realism of the tests must be carefully analyzed, as many clever results obtained in simulation can be rendered void by small modifications in real implementations. Systems must be shown to be robust to real-world situations, not just to clever noise or ungeneralizable contrived situations; although this is often easy in the simulated world, it remains a challenge in the real world.
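As a minimal sketch of such a small, easily generated perturbation (a fast-gradient-sign-style step against a toy logistic "perception" model with made-up weights; an illustration only, not the attack used in any cited work), the example below nudges each input feature slightly in the direction that most increases the loss and flips the prediction.

```python
# Minimal sketch: a fast-gradient-sign-style perturbation against a simple
# logistic "perception" model. Tiny, targeted noise flips the prediction
# even though the input barely changes.
import numpy as np

w = np.array([3.0, -4.0, 2.0])   # hypothetical trained weights
b = 0.1

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.3, 0.6, 0.2])    # clean input, true label y = 0
y = 0

# Gradient of the logistic loss w.r.t. the input is (p - y) * w.
grad_x = (predict_proba(x) - y) * w

epsilon = 0.2                    # perturbation budget per feature
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean prediction:       {predict_proba(x):.3f}")   # ~0.27 -> class 0
print(f"adversarial prediction: {predict_proba(x_adv):.3f}")  # ~0.69 -> class 1
print(f"max input change:       {np.abs(x_adv - x).max():.3f}")
# In high-dimensional inputs such as images, the same flip is achieved with far
# smaller per-feature changes, because gradient contributions accumulate.
```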

 

Similarly, exploits of artificial perception that have garnered less attention are perhaps even more acute in detection and avoidance scenarios involving adversarial noise. For example, when the output of an artificial perception system is publicly visible, it can be analyzed in order to learn how to trick the system, not because the creator of the system is malicious, but because non-robustness can be learned and exploited by corporations, malicious hackers, adversaries, and even terrorist organizations. This concern has led to proposals for perception systems whose outputs are not directly visible, though whether such no-visibility systems could themselves be probed remains an open question. Alternatively, perception systems must be made genuinely robust, but it is unclear at what cost and whether that cost would render them useless. (Evans et al., 2022)(Tao et al.2022)(Qian et al., 2022)

4. Societal Impact

 

The advancement of artificial intelligence (AI) technologies poses significant societal impacts and challenges. Despite the many potential benefits AI can contribute to the world, there are significant problems, from job displacement to inequality to the potential loss of autonomy. As organizations and individuals begin to adopt AI technology, it is critical to ensure that this technology is developed, implemented, and used ethically and safely.

 

Job Displacement. The adoption of AI technology can lead to the displacement of jobs that might otherwise have been secure. Typically, AI systems can handle repetitive tasks more quickly, accurately, and consistently than humans. These advantages are pushing some companies to consider replacing many employees in roles such as data entry, software testing, computer programming, and even driving and logistics. While this disruption will affect lower-level employees the most, as their jobs are those most often made redundant by AI, it is not exclusive to them. Many mid- and high-level employees with specialized degrees and experience might also find their jobs redundant. For example, companies that handle consumer loans often employ credit analysts to ensure potential borrowers meet their requirements. However, companies are increasingly using AI technologies to analyze the relevant factors, automate decisions, and limit manual input. Further, companies that embrace AI can offer lower interest rates to consumers than companies that continue to rely on human assessment of risk. Ultimately, if interest in technologies that replace credit analysts keeps growing, those employees might struggle to find work.

 

Inequality and Access. Artificial intelligence can add to social inequality when some have access to such technologies while others do not. Existing differences in markets and in companies’ competitiveness can be magnified if large companies gain access to the newest technologies that smaller companies cannot match. If larger companies can afford the R&D expenditure necessary to remain competitive and push aside competition, this can reinforce significant social divides. If only the wealthiest have access to technologies that can enhance education, the gap between the wealthiest and everyone else can grow wider. If health-related AI technologies are available only to those who can afford the corresponding treatment, this can lead to new and greater health-related social divides. There is also a risk that the biases continuously learning AI systems develop will be aimed largely at controlling behavior and advocacy. The companies and governments able to build and afford this technology can become far more powerful than those unable to access it. There are therefore reasons for concern; the externalities might include the societal impact of larger technology-driven divides, and some technologists argue that this uneven playing field is already beginning to emerge.

 

Autonomy and Control. Continually learning AI systems can be viewed as the initial steps toward technologies that are able to act independently. With this come questions about whom these systems might make autonomous decisions about, how consequential those decisions might be, and whether technology-driven, behavior-shaping decisions will develop beyond human comprehension. Along with discussions about the singularity, this should raise concerns about our continued ability to control or contain AI systems built to be hundreds of times more ingenious than the most ingenious human.

 

4.1. Job Displacement

 

Artificial intelligence (AI) represents a set of technologies that can perform complex tasks without human intervention. It encompasses algorithms and programs that enable machines to imitate human behaviors in the acquisition of knowledge, reasoning, interpretation, self-improvement, and so on. In recent decades, AI has triggered an extraordinary technological, social, economic, and political transformation, alongside other digital technologies. At its core, AI is the automation of job functions, processes, and tasks previously performed by humans. This automation covers manual, repetitive, and administrative tasks, simple to complex cognitive jobs, and even the counseling and interpretation of outcomes from sophisticated AI models. The complementary advancement of AI with other digital technologies constitutes the Fourth Industrial Revolution, alongside cyberspace, the Internet of Things, blockchain, and big data analytics. The combination of AI with the Web and the IoT puts intelligence into sensory devices deployed in everything from robotic vacuum cleaners upward. This hardware/software convergence has scaled the reach of AI technologies, enabling AI to permeate all aspects of economic and social life; it is practically impossible to turn every digital-sensory event over to human operation in real time.

 

AI systems are displacing occupations by taking over human cognition, from mining to academia. Many jobs have been augmented rather than replaced, but augmentation can switch to displacement in the next technological wave. Economists and policymakers worldwide face ever-growing pressure to adjust social, political, and economic systems to an unprecedented AI disruption, high unemployment rates, rising inequality, and political unrest. The question is the timetable of the AI disruption, not whether it will occur, which highlights the fragility of nations, corporations, communities, and families facing steep, unforeseeable, and rapid change.

 

Considering the resources central to AI production, the wages paid to workers, the emotional demands of occupations, and the number and type of routines and tasks performed, the job displacement caused by AI can be seen to follow four basic principles: 1) The displacement of jobs varies depending on the type of activity. 2) The job displacement increases with the delegation of tasks to AI. 3) The job displacement varies depending on skills and skill levels. 4) The job displacement varies depending on productivity and capital intensity. AI is set to displace jobs at an unprecedented pace and across all occupations unless policymakers take action to protect countries from mass unemployment.

 

4.2. Inequality and Access

 

Inequality in wealth and talent has long existed in the world. Nevertheless, there have always been more or less effective checks and balances on the excessive concentration of wealth and talent at the top. Levels of inequality have risen significantly throughout the Western world since the 1980s, and the checks and balances are now being dismantled. With the help of advanced technologies, this trend is likely to deepen and accelerate. There is now a risk of a world in which everything is the prerogative of a tiny elite, while everyone else lives a far more fragile, hazardous, and uncertain existence.

 

Artificial intelligence systems may put everyone in a pernicious loop: bias pulls people down, and the AI’s response pulls them further down. This could happen as people are pushed repeatedly down some funnel that whittles away their resources and prospects. They may not even see it happening, as it happens on social media platforms, where AI systems funnel people towards extremist content, image feeds, and so on. If the AI systems are making decisions about people’s jobs, economic opportunities, criminality, and so on, they will drive most people irretrievably in the wrong direction.
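The reinforcing loop described above can be sketched with an entirely hypothetical toy simulation: an automated system repeatedly grants an opportunity to whichever group currently scores higher, and each decision feeds back into the next round's scores, so a small initial gap widens without any explicit intent to discriminate.

```python
# Minimal sketch (hypothetical parameters): a scoring system that grants
# opportunities to higher-scoring individuals, whose scores then rise further,
# turns a small initial gap into a large one over repeated decisions.
group_a, group_b = 0.55, 0.45   # initial average scores of two groups

for round_number in range(1, 6):
    # The higher-scoring group is granted the opportunity and gains; the other loses.
    if group_a >= group_b:
        group_a, group_b = min(group_a + 0.05, 1.0), max(group_b - 0.05, 0.0)
    else:
        group_b, group_a = min(group_b + 0.05, 1.0), max(group_a - 0.05, 0.0)
    print(f"round {round_number}: group A = {group_a:.2f}, group B = {group_b:.2f}")

# The gap grows from 0.10 to 0.60 in five rounds, with no explicit bias coded in.
```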

 

There will be various sinister ways for this loop to play out. For example, job candidates’ résumés may be rejected because they lack certain keywords recognized by the AI system. Such keyword screening only works while all the candidates are human; as their numbers dwindle, there is a risk that qualified human candidates become completely invisible to employers, unable to escape being funneled towards unsuitable jobs outside the AI’s remit altogether. Other candidates may look suspicious because they all hold jobs suggestive of having been put through the same automated Career Guidance Engine designed for the unemployed. The AI systems could keep the rich in their place while condemning the disadvantaged and less talented to social death, and hence to enslavement or extermination if they no longer serve a purpose.

 

4.3. Autonomy and Control

 

While there are significant positive impacts from AI development, threats posed by autonomous AI agencies are gaining more attention, with various theories about risks emerging. Concerns focus on loss of control, misuse of AI, and unforeseen development of harmful capabilities.

 

A loss of human control over AI systems could lead to undesired outcomes or harm. AI could become uncontrollable during deployment or development. The less control humans retain, the more room there is for AI to operate autonomously, potentially developing capabilities for which it was never intended. Feasible scenarios include the unintended growth of capabilities or unintended consequences leading to dangers. There could also be negative intentions from the beginning, such as the development of offensive technology.

 

Three categories of scenarios are possible: unintended negative consequences, unintended development of harmful capabilities, and intended misuse. The first two concern the loss of control over AI agents initially built with positive intentions. The last, the deliberate misuse of technology built for harmful purposes, is already possible with existing technologies.

 

There are thus fundamental challenges, both technical and non-technical, regarding the autonomy of AI systems. With safety in mind, ensuring that safety objectives can still be guaranteed after control is transferred from the designer or operator to the AI agency is challenging, especially for complex systems. Attempts often rely on oversimplifications, requiring a careful examination of safety even prior to deployment. Three kinds of safety can be distinguished. Static safety means the AI agency is designed so that behavior violating the safety objectives is not attainable under any possible internal state. Dynamic safety means the safety objectives are guaranteed for all possible evolutions of the AI agency's output. Admissible behavior, or robustness, means the safety objectives are guaranteed for a neighborhood of possible AI agency outputs, ensuring that no harmful evolution will ever occur. In all of these cases, ensuring that every possible AI agency output falls within the accepted domain is essentially impossible, which raises the question of how AI systems that take safety into account should be built. There are also concerns about the intentional misuse of AI systems designed with positive safety guarantees, and about harms that arise without any negative intent. The question remains whether safety guarantees can be made universal, independent of the motivations of those deploying the system. AI systems could be enticed to subvert their original objectives or to covertly generate unpredictable consequences, and they could be repurposed and misused by actors with darker intentions. (Du2024)(Acemoglu et al.2022)(Soueidan and Shoghari, 2024)(Liu et al., 2023)
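The three safety notions above can be stated a little more formally; the notation below is introduced purely for illustration and is not taken from the cited works. Let S be the set of internal states of the AI agency, O(s) the set of outputs reachable from state s, H the set of harmful outputs, and d a distance on outputs with perturbation budget ε.

```latex
% Illustrative formalization (hypothetical notation):
% S: internal states, O(s): outputs reachable from state s, H: harmful outputs.
\begin{align*}
\text{Static safety:}  \quad & \forall s \in S:\; O(s) \cap H = \emptyset \\
\text{Dynamic safety:} \quad & \forall \text{ output trajectories } (o_0, o_1, \dots):\; \forall t,\; o_t \notin H \\
\text{Robustness:}     \quad & \forall o \text{ produced by the agency},\; \forall o' \text{ with } d(o, o') \le \varepsilon:\; o' \notin H
\end{align*}
```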

 

5. Regulatory and Legal Frameworks

 

The rapid development and deployment of artificial intelligence (AI) systems have given rise to a myriad of regulatory and legal challenges that paradoxically threaten both the advancement of such systems and the societal benefit they bring. As researchers and industry leaders pursue ever more powerful AI systems, governments and coalitions of nations are seeking to stay abreast of the potential benefits and risks, as well as of ways to foster competitiveness while minimizing threats to incumbents. The most heavily studied and visible challenges concern bias, discrimination, and privacy. Historically and currently, many AI systems are trained on corpora of text, images, or records of people’s words and actions that have been published for public use or otherwise released commercially. Other AIs must be trained with new data, some of it gathered passively on individuals through surveillance or the deployment of connected devices. In either case, historical and current patterns in society are present in these data; if biased or discriminatory patterns exist in them, the AIs trained on such data can be expected to replicate them. The types of bias and discrimination thereby proliferated, exacerbated, or newly created generate risks that fall under legal and regulatory scrutiny or outright prohibition, and that concern technology producers because they threaten reputational damage and loss of customers. Emerging techniques for auditing such systems will be reviewed and discussed along with their limits. Systems trained on data scraped from the World Wide Web will increasingly be challenged regarding their compliance with entrenched intellectual property rights and with input-collection practices suitable in democratic societies. Furthermore, governments and international organizations are exploring the potential need for regulation, and where necessary legislation, for the design and deployment of AI systems. Both the details requested by governments and the approaches being considered will be addressed, along with the categories of AI systems they might apply to, such as those trained with data from governments and other public institutions or from the commercial use of connected devices. (Hagendorff et al., 2023)(Varsha2023)(Bagaric et al., 2022)(Curto et al., 2024)(Belenguer, 2022)

 

6. International Cooperation and Governance

 

The rise of artificial intelligence (AI) poses challenges and concerns that cross national borders and can only be effectively and appropriately addressed through international cooperation and governance. In the past, the international community has addressed weapons of mass destruction, climate change, and other transnational concerns via cooperative agreements. While the recent past has seen a narrowing of the extent to which countries cooperate through intergovernmental organizations, such cooperation is exactly what is needed, with help from civil society groups, corporations, philanthropic foundations, and knowledgeable individuals.

 

Most discussions of AI governance are largely concerned with the safety of future powerful machines and with upholding human rights and promoting freedoms while avoiding harms. The two conservative parties of the United States and the United Kingdom, being generally pro-business, have been skeptical of the prospect that AI will become so intelligent, powerful, and widespread that it ends up controlling humans. In contrast, the two center-left parties in Europe, as well as the Obama administration, were worried about the future growth of AI’s intelligence and power. China sees AI as a weapon with which to upend the world order in its favor.

 

An early foray into AI governance was an Open Letter released at a conference on the Societal Impacts of AI in January 2015. The Open Letter acknowledged the societal benefits of various AI technologies and pledged that the benefits of any AI deployment should be distributed broadly among all people. The signers encouraged technical AI researchers to devote substantial resources to making advanced AI systems robust and controllable and to guaranteeing their alignment with core values. By 2023, these calls had matured into a myopic set of goals, intentions, rules, and principles that does not begin to equal the challenge posed by rapidly advancing AI programs. (Budhwar et al.2022)(Martin and Freeland, 2021)(Korinek and Stiglitz, 2021)(Dwivedi et al.2021)

7. Future Directions and Solutions

 

As artificial intelligence (AI) continues to expand in scope and capabilities, the exploration of future directions constitutes an increasingly pertinent subject matter. This section presents potential solutions for the challenges and pitfalls of AI, encompassing avenues for research and innovation, workforce and education initiatives, and the advancement of ethical design principles. Such solutions require commitment and collaboration from stakeholders at all levels, including industry, government, and academia, and continuous assessment is essential.

 

In an effort to address the challenges and pitfalls of AI in the workforce, continued research and innovation remain critical. In particular, mechanisms that promote collaboration and co-development between smart machines and contingent workers can augment job productivity. Further, promoting transparency and supporting knowledge transfer from machines is critical for promoting adaptability. Research institutions and universities should tailor programs to cultivate this adaptability, equipping workers with skills that complement AI systems. In addition, governments should enact policies that induce investment in technology that complements workers, both by funding such investment publicly and by incentivizing private investment. This includes a blend of regulations and financial incentives to encourage the adoption of technology that fosters broader employment opportunities rather than partially replacing contingent workers with AI.

 

Expanding educational programming at the K-12 level, as well as within community colleges, public universities, and adult learning programs, can enhance workforce resilience. In addition to promoting computer programming and modeling courses, education in skills such as creativity, entrepreneurship, communication, and collaboration should be emphasized, as these will become increasingly valuable in an AI-dominant workforce. This programming should be tailored both to younger generations and to individuals nearing retirement, with consideration for differences across industries and demographic backgrounds.

 

An effort to incorporate ethical design principles into new AI technologies constitutes an essential solution in addressing the challenges and pitfalls of AI implementation in investment, marketing, hiring, and education. These principles include the fair representation of individuals in training data, developing auditing tools to evaluate bias and discrimination, delivering transparency in algorithm design, maintaining accountability for learned algorithm decisions, and establishing mechanisms for grievance redress. Unanticipated societal changes associated with AI can be modeled proactively and integrated into AI engineering practices. This includes exploring automation candidates across industries, precursor events involving implementation at smaller scales, and regulatory foresight to set ethical boundaries to influence design considerations. AI systems should be engineered with the understanding that new technology will affect social structure, power distribution, and the economy, and as such, AI design must prioritize user rights and be vetted by stakeholders so that the consequences of implementation are aligned with established values.

 

7.1. Research and Innovation

 

Artificial intelligence is undergoing a rapid expansion into vibrant new fields such as health care, transportation, logistics, finance, and agriculture. This expansion creates new challenges for AI research and innovation. A new set of obstacles, distinct from but always entangled with existing problems, is arising. Many stem from movements to curtail ongoing research and development in AI. Other issues include higher risks for individuals and groups related to the deployment of, and access to, AI technologies; strains on established research and development structures; and competition for human talent beyond the domain of AI. Novel risks also spring from specific AI technologies, such as the emergence of powerful deepfakes that could pose significant threats.

 

Concerns about or, at times, deficits in the ethics, usability, and social impacts of AI technologies are not new. Over the years, a broad range of problems has persisted, remained under consideration, and in varied ways has been addressed. Nonetheless, some attempts to engage with AI and sociodigital technologies or the ‘AI+X’ coupling more broadly have been siloed across specific disciplinary or area boundaries. As a result, nascent conversations have not yet reached sufficient breadth, scale, or intensity to inspire holistic interactions moving forward, generally remain unaware of one another, and poorly incorporate intersections with other possible futures.

 

New engines of interest and intensity have been ignited by a proliferation of unsettling AI systems that work with speech, text, and image data, trained on large corpora drawn from almost everything. The novelty of these AI technologies, and the stumbles and troubles accompanying their rollout, have raised alarms and opened new lines of inquiry along political, ethical, and usability dimensions. AI policy has shifted from a mode of competitive advantage to one that also encompasses existential long-term threats. Various scientific societies, agencies, and corporations are expressing concern with how AI technologies are shaping society and are quickly forming task forces, principles, recommendations, and regulations to guide appropriate future development. Amidst the emergence of new AI technologies and the closure of R&D opportunities, efforts at engagement, governance, and mitigation are pressed into the fray.

 

7.2. Education and Workforce Development

 

Education and workforce development will be essential in leveraging AI technologies for the transformation of many existing jobs or the creation of new ones, including new job categories and new tasks. Universal education, awareness, literacy, and sensitization campaigns, from schools, colleges, and universities to the workplace, will all be essential. The role of AI in the transformation of tasks and jobs must also be addressed at such levels to prepare the workforce. It is crucial to focus on three aspects: efforts to adopt and adapt AI technologies in a more balanced way, efforts to prepare the workforce for new or transformed tasks or jobs, and efforts to establish new jobs as AI technologies and tasks are transformed.

 

Equal access to and adoption of AI technologies are becoming a challenge. There is a risk that powerful corporations or conglomerates will adopt AI technologies in ways that aggressively transform vast numbers of jobs and tasks and displace regular jobs, all within a very short timeframe. Such decisions will be out of reach for governments or trade unions, and the transformations will happen independently of any local context, be it economic, environmental, social, or political. They will have social and political repercussions in many developed countries. Expecting regular jobs to be replaced smoothly, with mandatory re-qualification, may instead push workers into hostile confrontations. AI technologies have the potential to massively transfer wealth and power from professionals to the organizations that profit from their services. This rapid transformation is unprecedented, and masses of users may become dependent on the services offered.

 

Education and workforce development will be essential to mitigate the devastating consequences of uncalibrated transformations. New educational programs and skill training cannot simply target old or previous tasks and jobs, because AI technologies disrupt extensive bodies of established knowledge. New education systems and processes, modes of labor organization across companies and services, economic means, and incentives are needed. Education, from the elementary level through university, is needed in all environments and fields of knowledge. Awareness programs and skill training for companies of all sizes and shapes, from national and shared projects to local solutions, will also be needed. Education and workforce development programs must provide a well-crafted understanding of AI technologies and their multiple potential consequences. These programs must also impart the capacity to analyze such technologies in actual cases. It is urgent to educate society about existing AI technologies, relevant international frameworks, ethical guidelines, and national policies in place. Furthermore, the implementation of local educational and workforce development programs must be monitored and their effects evaluated.

 

7.3. Ethical Design Principles

 

Human-centered approaches to artificial intelligence design follow contingent, design-led principles rather than simply satisfying guidelines; they model humans as users and aim at intelligence augmentation. The design challenge, however, is complicated by the need to scope boundaries, ensure a net positive outcome, and limit societal perturbation; success depends on normative demands and on understanding behaviors in relation to their emergent consequences. As society reconfigures the norms governing privacy, transparency, and responsibility, so must the terms governing action and interaction with AI. AI systems are an extension of the designer’s intentions, and those intentions should be made explicit, whether for in-house programmers or for third-party designs, reflecting the cultural context and the space for technological engagement. These conditions highlight the need for prescriptive ethics: human-centered design that emphasizes aspirational values such as fairness, accountability, transparency, and ethics. Such principles temper potential behavioral and society-wide effects, helping ensure that AI complements human skills rather than substituting for them. Well-being architectures and impact assessments safeguard systems from drifting into net harms that outweigh the design’s intended purpose. Societal perturbation here means that AI systems promote or enable unintended, harmful actions, an emergent aspect of unintended consequences. Monitoring for such effects keeps AI in use as originally intended and institutionalizes intentionality across all areas of engagement. (Saaida2023)(Allioui and Mourdi2023)(Gray et al.2022)(Singh2023)(Li, 2022)

 

8. Conclusion

 

Artificial intelligence is a promising technology that could transform how businesses compete, but its rising power brings new challenges and risks. First, some AI systems could become uncontrollable and inflict unintended harm. AI systems based on deep reinforcement learning must be given objectives to optimize, which opens a pathway to unintended, harmful, or dangerous behavior. Moreover, AI systems have rapidly exceeded human capability in many activities and rendered traditional mechanisms for controlling these systems obsolete. Second, many AI systems are trained on data that reflects past discrimination, crime patterns, and other inequities, and are therefore themselves biased. Consequently, as they make more decisions about hiring, housing, education, and law enforcement, they could exacerbate societal problems. Third, fear abounds that intelligent agents with unlimited power would act against humans’ self-interest. In response, many researchers and developers are actively pursuing a more benevolent form of superintelligence, although attempts to impose control from within are extremely complicated and could fail if an AI system discovers its own capabilities. All of these challenges are exacerbated by the accelerating growth of AI systems and the emergence of AI capabilities in startups and less regulated environments. Concerns include disempowered scientists, academics, and governments; the dangers of economic upheaval; and the possibility of a run (or race) to build dangerous AI systems that receive no meaningful oversight. Thus, the goal of this paper is to foster cooperation to address the risks associated with advanced AI systems before it is too late. The first section provides a brief overview of major AI developments over the past few years and discusses why they present particular challenges. The following sections provide specific details on the more significant risks emerging from AI. The paper then presents actions needed to address these risks and some paths for beginning this process. It concludes by providing some background on the authors, laying out the paper’s scope of discussion, and suggesting further reading for those wishing to better understand the development and societal implications of AI. (Schwartz et al.2022)(Ferrara, 2023)(Peters, 2022)(Strauß, 2021)(Farahani and Ghasemi, 2024)(O’Connor and Liu, 2024)

References:

Makridis, C. A. and Mishra, S. “Artificial intelligence as a service, economic growth, and well-being.” Journal of Service Research (2022). [HTML]

Naudé, W. “Artificial intelligence: neither Utopian nor apocalyptic impacts soon.” Economics of Innovation and new technology (2021). tandfonline.com

Babina, T., Fedyk, A., He, A., and Hodson, J. “Artificial intelligence, firm growth, and product innovation.” Journal of Financial Economics (2024). sciencedirect.com

Hirsch-Kreinsen, H. “Artificial intelligence: A “promising technology”.” AI & SOCIETY (2023). springer.com

Adekoya, O. B., Oliyide, J. A., Saleem, O., and Adeoye, H. A. “… connectedness between Google-based investor attention and the fourth industrial revolution assets: The case of FinTech and Robotics & Artificial intelligence stocks.” Technology in Society (2022). [HTML]

Ferrer, X., Van Nuenen, T., and Such…, J. M. “Bias and discrimination in AI: a cross-disciplinary perspective.” IEEE Technology and … (2021). [PDF]

Varona, D. and Suárez, J. L. “Discrimination, bias, fairness, and trustworthy AI.” Applied Sciences (2022). mdpi.com

Heinrichs, B. “Discrimination in the age of artificial intelligence.” AI & society (2022). springer.com

Peters, U. “Algorithmic political bias in artificial intelligence systems.” Philosophy & Technology (2022). springer.com

Gupta, M., Parra, C. M., and Dennehy, D. “Questioning racial and gender bias in AI-based recommendations: Do espoused national cultural values matter?.” Information Systems Frontiers (2022). springer.com

Capraro, Valerio, Austin Lentsch, Daron Acemoglu, Selin Akgun, Aisel Akhmedova, Ennio Bilancini, Jean-François Bonnefon et al. “The impact of generative artificial intelligence on socioeconomic inequalities and policy making.” PNAS nexus 3, no. 6 (2024). oup.com

Ferrara, E. “Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies.” Sci (2023). mdpi.com

Pappalardo, Luca, Emanuele Ferragina, Salvatore Citraro, Giuliano Cornacchia, Mirco Nanni, Giulio Rossetti, Gizem Gezici et al. “A survey on the impact of AI-based recommenders on human behaviours: methodologies, outcomes and future directions.” arXiv preprint arXiv:2407.01630 (2024). [PDF]

Ferrara, E. “Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies. Sci. 2024; 6: 3.” (2023). admin.ch

Wach, Krzysztof, Cong Doanh Duong, Joanna Ejdys, Rūta Kazlauskaitė, Pawel Korzynski, Grzegorz Mazurek, Joanna Paliszkiewicz, and Ewa Ziemba. “The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT.” Entrepreneurial Business and Economics Review 11, no. 2 (2023): 7-30. uek.krakow.pl

Osasona, Femi, Olukunle Oladipupo Amoo, Akoh Atadoga, Temitayo Oluwaseun Abrahams, Oluwatoyin Ajoke Farayola, and Benjamin Samson Ayinla. “Reviewing the ethical implications of AI in decision making processes.” International Journal of Management & Entrepreneurship Research 6, no. 2 (2024): 322-335. fepbl.com

Whang, S. E., Roh, Y., Song, H., and Lee, J. G. “Data collection and quality challenges in deep learning: A data-centric ai perspective.” The VLDB Journal (2023). [PDF]

Ding, Frances, Moritz Hardt, John Miller, and Ludwig Schmidt. “Retiring adult: New datasets for fair machine learning.” Advances in neural information processing systems 34 (2021): 6478-6490. neurips.cc

Kaissis, Georgios A., Marcus R. Makowski, Daniel Rückert, and Rickmer F. Braren. “Secure, privacy-preserving and federated machine learning in medical imaging.” Nature Machine Intelligence 2, no. 6 (2020): 305-311. nature.com

Novelli, C., Taddeo, M., and Floridi, L. “Accountability in artificial intelligence: what it is and how it works.” Ai & Society (2024). springer.com

Busuioc, M. “Accountable artificial intelligence: Holding algorithms to account.” Public administration review (2021). wiley.com

Santoni de Sio, F. and Mecacci, G. “Four responsibility gaps with artificial intelligence: Why they matter and how to address them.” Philosophy & Technology (2021). springer.com

Asatiani, Aleksandre, Pekka Malo, Per Rådberg Nagbøl, Esko Penttinen, Tapani Rinta-Kahila, and Antti Salovaara. “Sociotechnical envelopment of artificial intelligence: An approach to organizational deployment of inscrutable artificial intelligence systems.” Journal of the association for information systems 22, no. 2 (2021): 325-352. aalto.fi

Hutchinson, Ben, Andrew Smart, Alex Hanna, Emily Denton, Christina Greer, Oddur Kjartansson, Parker Barnes, and Margaret Mitchell. “Towards accountability for machine learning datasets: Practices from software engineering and infrastructure.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 560-575. 2021. acm.org

Brynjolfsson, E. “The turing trap: The promise & peril of human-like artificial intelligence.” Augmented education in the global age (2023). oapen.org

Desai, Bhavin, Kapil Patil, Asit Patil, and Ishita Mehta. “Large Language Models: A Comprehensive Exploration of Modern AI’s Potential and Pitfalls.” Journal of Innovative Technologies 6, no. 1 (2023). academicpinnacle.com

Raji, Inioluwa Deborah, I. Elizabeth Kumar, Aaron Horowitz, and Andrew Selbst. “The fallacy of AI functionality.” In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 959-972. 2022. acm.org

Sarker, I. H. “AI-based modeling: techniques, applications and research issues towards automation, intelligent and smart systems.” SN Computer Science (2022). springer.com

Fügener, Andreas, Jörn Grahl, Alok Gupta, and Wolfgang Ketter. “Will humans-in-the-loop become borgs? Merits and pitfalls of working with AI.” Management Information Systems Quarterly (MISQ) 45 (2021). eur.nl

Schwartz, Reva, Apostol Vassilev, Kristen Greene, Lori Perine, Andrew Burt, and Patrick Hall. Towards a standard for identifying and managing bias in artificial intelligence. Vol. 3. US Department of Commerce, National Institute of Standards and Technology, 2022. dwt.com

Liang, Weixin, Girmaw Abebe Tadesse, Daniel Ho, Li Fei-Fei, Matei Zaharia, Ce Zhang, and James Zou. “Advances, challenges and opportunities in creating data for trustworthy AI.” Nature Machine Intelligence 4, no. 8 (2022): 669-677. [HTML]

Zhang, C. and Lu, Y. “Study on artificial intelligence: The state of the art and future prospects.” Journal of Industrial Information Integration (2021). [HTML]

Zha, Daochen, Zaid Pervaiz Bhat, Kwei-Herng Lai, Fan Yang, Zhimeng Jiang, Shaochen Zhong, and Xia Hu. “Data-centric artificial intelligence: A survey.” arXiv preprint arXiv:2303.10158 (2023). [PDF]

Alzubaidi, Laith, Jinshuai Bai, Aiman Al-Sabaawi, Jose Santamaría, Ahmed Shihab Albahri, Bashar Sami Nayyef Al-dabbagh, Mohammed A. Fadhel et al. “A survey on deep learning tools dealing with data scarcity: definitions, challenges, solutions, tips, and applications.” Journal of Big Data 10, no. 1 (2023): 46. springer.com

Matsuo, Yutaka, Yann LeCun, Maneesh Sahani, Doina Precup, David Silver, Masashi Sugiyama, Eiji Uchibe, and Jun Morimoto. “Deep learning, reinforcement learning, and world models.” Neural Networks 152 (2022): 267-275. sciencedirect.com

Minh, D., Wang, H. X., Li, Y. F., and Nguyen, T. N. “Explainable artificial intelligence: a comprehensive review.” Artificial Intelligence Review (2022). [HTML]

Saeed, W. and Omlin, C. “Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities.” Knowledge-Based Systems (2023). sciencedirect.com

Angelov, Plamen P., Eduardo A. Soares, Richard Jiang, Nicholas I. Arnold, and Peter M. Atkinson. “Explainable artificial intelligence: an analytical review.” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 11, no. 5 (2021): e1424. wiley.com

Mohseni, Sina, Niloofar Zarei, and Eric D. Ragan. “A multidisciplinary survey and framework for design and evaluation of explainable AI systems.” ACM Transactions on Interactive Intelligent Systems (TiiS) 11, no. 3-4 (2021): 1-45. acm.org

Shin, D. “The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI.” International journal of human-computer studies (2021). academia.edu

Evans, B. D., Malhotra, G., and Bowers, J. S. “Biological convolutions improve DNN robustness to noise and generalisation.” Neural Networks (2022). biorxiv.org

Tao, Lue, Lei Feng, Hongxin Wei, Jinfeng Yi, Sheng-Jun Huang, and Songcan Chen. “Can adversarial training be manipulated by non-robust features?.” Advances in Neural Information Processing Systems 35 (2022): 26504-26518. neurips.cc

Qian, Z., Huang, K., Wang, Q. F., and Zhang, X. Y. “A survey of robust adversarial training in pattern recognition: Fundamental, theory, and methodologies.” Pattern Recognition (2022). [PDF]

Du, Jiaxing. “The impact of artificial intelligence adoption on employee unemployment: A multifaceted relationship.” International Journal of Social Sciences and Public Administration 2, no. 3 (2024): 321-327. researchgate.net

Acemoglu, Daron, David Autor, Jonathon Hazell, and Pascual Restrepo. “Artificial intelligence and jobs: Evidence from online vacancies.” Journal of Labor Economics 40, no. S1 (2022): S293-S340. nber.org

Soueidan, M. H. and Shoghari, R. “The Impact of Artificial Intelligence on Job Loss: Risks for Governments.” Technium Soc. Sci. J. (2024). techniumscience.com

Liu, Y., Meng, X., and Li, A. “AI’s Ethical Implications: Job Displacement.” Advances in Computer and Communication (2023). hillpublisher.com

Hagendorff, T., Bossert, L. N., Tse, Y. F., and Singer, P. “Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals.” AI and Ethics (2023). springer.com

Varsha, P. S. “How can we manage biases in artificial intelligence systems–A systematic literature review.” International Journal of Information Management Data Insights 3, no. 1 (2023): 100165. sciencedirect.com

Bagaric, M., Svilar, J., Bull, M., Hunter, D., and Stobbs, N. “The solution to the pervasive bias and discrimination in the criminal justice system: transparent and fair artificial intelligence.” Am. Crim. L. Rev. (2022). qut.edu.au

Curto, G., Jojoa Acosta, M. F., Comim, F., and Garcia-Zapirain, B. “Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings.” AI & society (2024). springer.com

Belenguer, L. “AI bias: exploring discriminatory algorithmic decision-making models and the application of possible machine-centric solutions adapted from the pharmaceutical ….” AI and Ethics (2022). springer.com

Budhwar, Pawan, Ashish Malik, MT Thedushika De Silva, and Praveena Thevisuthan. “Artificial intelligence–challenges and opportunities for international HRM: a review and research agenda.” The International Journal of Human Resource Management 33, no. 6 (2022): 1065-1097. tandfonline.com

Martin, A. S. and Freeland, S. “The advent of artificial intelligence in space activities: New legal challenges.” Space Policy (2021). [HTML]

Korinek, A. and Stiglitz, J. E. “Artificial intelligence, globalization, and strategies for economic development.” (2021). nber.org

Dwivedi, Yogesh K., Laurie Hughes, Elvira Ismagilova, Gert Aarts, Crispin Coombs, Tom Crick, Yanqing Duan et al. “Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy.” International journal of information management 57 (2021): 101994. openrepository.com

Saaida, Mohammed BE. “AI-Driven transformations in higher education: Opportunities and challenges.” International Journal of Educational Research and Studies 5, no. 1 (2023): 29-36. academia.edu

Allioui, Hanane, and Youssef Mourdi. “Unleashing the potential of AI: Investigating cutting-edge technologies that are transforming businesses.” International Journal of Computer Engineering and Data Science (IJCEDS) 3, no. 2 (2023): 1-12. ijceds.com

Gray, Kathleen, John Slavotinek, Gerardo Luis Dimaguila, and Dawn Choo. “Artificial intelligence education for the health workforce: expert survey of approaches and needs.” JMIR medical education 8, no. 2 (2022): e35223. jmir.org

Singh, Rana Jairam. “Transforming higher education: The power of artificial intelligence.” International Journal of Multidisciplinary Research in Arts, Science and Technology 1, no. 3 (2023): 13-18. ijmrast.com

Li, L. “Reskilling and upskilling the future-ready workforce for industry 4.0 and beyond.” Information Systems Frontiers (2022). springer.com

Strauß, S. “Deep automation bias: how to tackle a wicked problem of AI?.” Big Data and Cognitive Computing (2021). mdpi.com

Farahani, M. and Ghasemi, G. “Artificial intelligence and inequality: challenges and opportunities.” (2024). radensa.ru

O’Connor, S. and Liu, H. “Gender bias perpetuation and mitigation in AI technologies: challenges and opportunities.” AI & SOCIETY (2024). springer.com
