Ethical Concerns in AI-Driven Chemistry Research

1. Introduction to AI-Driven Chemistry Research

 

Artificial intelligence (AI) has revolutionized many fields, including chemistry. Researchers use AI technologies, such as machine learning and deep learning, to analyze chemical data and build predictive models. AI applications in chemistry have gained momentum rapidly over the last decade. Traditional uses of AI in chemistry include cheminformatics, QSAR modeling, and ADMET modeling. Deep learning, meanwhile, emerged as a powerful AI technology in the mid-2010s; owing to its ability to extract hidden data representations, it has been applied across scientific fields. In chemistry research, deep learning has been applied to molecular generation and property prediction, and large pre-trained generative models have recently been developed to accelerate chemistry research further. These efforts have greatly advanced the modeling of complex chemical systems and dramatically accelerated molecular and materials discovery.

 

A wide range of AI computing technologies have been developed, extensively published, and applied to chemistry research. They can be categorized into three main types: data-driven AI modeling, lattice-based AI modeling, and first-principles AI modeling. The key difference among the three is how input features are generated from chemical systems. Data-driven AI models represent chemical systems through chemical descriptors derived from empirical rules, which underpin conventional cheminformatics and machine learning approaches. Lattice-based AI models represent chemical systems as networks of discrete atomic sites and chemical bonds, with descriptors of these networks generated via Monte Carlo simulation algorithms. First-principles AI models directly incorporate basic physical laws, such as quantum mechanics, into the model, so the structure of the input features is dictated by the physics rather than chosen freely by any single research team. AI technologies and applications in chemistry are summarized below, and the related ethical concerns anticipated.
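To make the distinction between representation types concrete, here is a minimal, purely illustrative Python sketch contrasting a hand-crafted descriptor vector with a graph (adjacency-matrix) representation of the same molecule. The descriptor choices and helper names are hypothetical, not from any particular toolkit.

```python
# 1) Descriptor-based (data-driven) representation: a fixed-length vector of
#    hand-crafted features computed from the structure.
def descriptor_vector(atoms, bonds):
    """atoms: list of element symbols; bonds: list of (i, j) index pairs."""
    weights = {"C": 12.011, "H": 1.008, "O": 15.999, "N": 14.007}
    mol_weight = sum(weights[a] for a in atoms)
    n_heavy = sum(1 for a in atoms if a != "H")
    return [mol_weight, n_heavy, len(bonds)]

# 2) Graph-based representation: atoms as nodes, bonds as edges, from which
#    a graph neural network would learn its own features.
def adjacency_matrix(atoms, bonds):
    n = len(atoms)
    adj = [[0] * n for _ in range(n)]
    for i, j in bonds:
        adj[i][j] = adj[j][i] = 1
    return adj

# Example: methanol, CH3OH (indices: 0=C, 1=O, 2-5=H)
atoms = ["C", "O", "H", "H", "H", "H"]
bonds = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 5)]
print(descriptor_vector(atoms, bonds))   # [molecular weight, heavy atoms, bond count]
print(adjacency_matrix(atoms, bonds)[0])  # bonds incident to the carbon atom
```

The descriptor vector discards most structural detail by design, whereas the graph keeps the full connectivity and leaves feature extraction to the model.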

 

The acceleration of chemistry research via deep AI-driven computing techniques is a double-edged sword. The underlying computing technologies sometimes raise ethical concerns around justice, accountability, and privacy. For example, concerns about academic integrity have heightened in the wake of the increased use of machine learning in studying fundamental scientific problems. Additionally, the potential misuse of sensitive chemical information is an ethical concern brought on by the rapid increase in online sharing of chemical databases. Recently, a few companies announced that they might commercialize chemistry-based generative artificial intelligence technologies, which could enable uncontrolled criminal activity in the chemistry community. Such concerns must be adequately addressed with proper ethical consideration when developing and applying deep AI technologies in chemistry research. (Mita, 2022; Sevnarayan and Potter, 2024; Yusuf et al., 2024; Sbaffi and Zhao, 2022)

 

1.1. Overview of AI Applications in Chemistry

 

The relentless evolution of computer-aided drug design (CADD), together with the rise of large language models (LLMs), has changed the chemistry software landscape in substantial ways in recent years. Chemoinformatics traditionally provides diverse, growing datasets, sophisticated descriptor sets, and modeling techniques; deep learning (DL) approaches, on the other hand, provide modeling techniques that go beyond handcrafted descriptors. Both changes have the potential to fundamentally alter what sort of chemistry software can be written. Further, efforts toward democratization and open science create growing opportunities for academics and startups to develop software, while established companies come under increasing pressure to innovate faster.

 

Artificial intelligence (AI) has the potential to turn the whole chemical discipline upside down, much as AI-driven chatbots and the deep learning revolution have had a significant impact on multiple disciplines in academia and industry, including finance, electronics, and chemistry. Recent successes with AlphaFold, remarkable progress in drug design using state-of-the-art generative models, and self-learning agents suggest that AI-driven modeling, simulation, and optimization of chemical systems may be within reach. But it remains wide open how far AI can actually go in chemistry, how sound and useful the results are, what the conditions for sound application are, and what ethical concerns arise.

 

Machine learning (ML) is a sub-discipline of AI and should be understood broadly as software-based modeling of the world using statistical methods and learning from examples. Chemoinformatics is understood here as the discipline of applying computational approaches to chemical problems. Within chemoinformatics, ML is the sub-discipline in which ML approaches are applied to chemical problems, and deep learning is a specific ML approach. At the other extreme, the term chemoinformatics is sometimes used to cover artificial intelligence (AI) approaches generally. (Ananikov, 2024; Baum et al., 2021; Kuntz and Wilson, 2022; Ayres et al., 2021; Biriukov and Vácha, 2024)

 

2. Ethical Principles in Scientific Research

 

Scientific research involves creativity and innovation, but due to its far-reaching effects on humankind, it must be conducted in an ethical way. The way laboratories are managed, the safety procedures to comply with, and the treatment offered to laboratory workers depend on national regulations and institutional policies. However, beyond complying with policies and regulations, researchers share a responsibility towards society. Three ethical principles are used worldwide to develop and evaluate the policies for scientific research: beneficence, non-maleficence, and justice.

 

The ethical principle of beneficence holds that scientific research should promote and maximize benefits for society. Potential benefits include improving quality of life, increasing life expectancy, fostering economic prosperity, and protecting cultures and environments. This principle requires researchers to make an honest assessment of the efficacy of their research. Experimental evidence suggests that AI is capable of many tasks because of its enormous capacity to memorize and regurgitate information, not necessarily because it has the ability to reason. One may therefore wonder whether drawing conclusions using AI is still science.

 

The ethical principle of non-maleficence holds that scientific research should avoid or minimize adverse effects on society. Potential harms of research include suffering, loss of control, privacy abuse, job loss, and environmental disaster. This principle requires researchers to make an honest assessment of the effects of their research. Understanding the limitations of AI is critical, since blindly deploying an AI-based system without understanding the risks involved may have dire consequences. An arms race in AI-driven chemistry and life-sciences research, conducted without safety nets, is a serious concern because the resulting research toolkit could be used to inflict harm on society. The most direct harm from such AI-driven chemistry research would be the development of chemical and biological agents for terrorist purposes. AI-driven medicinal chemistry exercises implemented in schools, in which students pursue rapid drug discovery against certain viruses as a research activity, pose the risk of producing therapeutics that offer no benefit to society but could be exploited by organized crime and terrorist organizations. (Qamar, 2024; Koblentz, 2020; Rubinic et al., 2024; Krin, 2023; Кустов et al., 2023)

2.1. Beneficence and Non-Maleficence

 

In research and its applications, the concepts of “beneficence” and “non-maleficence” are crucial. An experiment should be conducted so as to avoid all unnecessary physical and mental suffering and injury, and the interests of the subject must always prevail over the interests of science and society. These doctrines provide the foundation for the ethics committees established by institutions, universities, and industry-sponsored research, and the two principles are broadly interpreted to cover all eventualities of harm that may be incurred by humans in the course of research. Although the principles focus on human research, there is broad consensus that they apply equally to research involving non-human animals. The terms of reference of the Ethics-Related Expert Group on Multilateral/Bilateral Agreements concerning the use of animals for testing chemicals state that “the 3Rs concept (Replacement, Reduction, and Refinement) is the foundation and core principle for the ethical justification of all research utilizing animals.” In a similar vein, the Convention on the Protection of Animals Used for Scientific Purposes obligates parties to “ensure that the use of animals for experimental and other scientific purposes is covered by ethical review.” However, since there are grey areas where the concepts of relevance, human interest, human benefit, or necessity have been interpreted differently across organizations and nations, only “cautious optimism” has been expressed in the past that these public declarations would improve conditions for laboratory animals.

This note takes a critical look, from outside the boundaries of the chemical community, at the practice of “beneficence” and “non-maleficence” offered by that community, and asks whether it is aware of the accumulating disputes between researchers and regulators. In doing so, the hope is to contribute to a better understanding of the regulation of research and its consequences, and to foster better preparedness for future criticisms of research, its regulation, and the ethical compromises that investigators and institutions may be willing to make in order to keep their research intact.

The principles of beneficence and non-maleficence, and their counterparts in the ethical regulation of research in the chemical industry, are particularly relevant to the selection of experimental protocols and the design of reagents and products using computer-aided drug design. The principle of beneficence focuses on maximizing benefits while preventing or minimizing the risks of harm. Both principles require extraordinary effort to ensure that research, testing, and products will not harm subjects, society, the environment, or biodiversity. In principle, this includes efforts to ensure that the work will not develop into a major catastrophe, as has happened with nuclear proliferation and the misuse of biotechnologies and GMOs. In practice, however, there are limits to what can be considered a responsibility at the epistemic level. (Pietilä et al., 2020; Cheraghi et al., 2023; O’Donoghue, 2023; Brear and Gordon, 2021)

 

3. Ethical Considerations in AI-Driven Chemistry Research

 

The rapid adoption of AI technologies in chemistry research has raised ethical concerns that need to be addressed. These concerns stem from the unique challenges posed by chemistry data, analysis, and insights, which are often complex and difficult to interpret. A commitment to ethical AI-driven chemistry research is necessary not only for the continuation of research but also for the protection of people and the environment. Given the potential for catastrophic abuse of AI technology, the AI-driven chemistry community must proactively mitigate risks and avoid pitfalls already encountered by other disciplines. Although chemistry AI is already a large field, it will only grow larger. Trust among researchers, funding agencies, regulatory agencies, and the general public is crucial; trust takes time to build and is fragile. If researchers use AI resources irresponsibly, much as with misleading research, it could damage the reputation of all AI-driven chemistry research, and such obstacles could hinder or eliminate the benefits of innovative chemistry AI tools.

 

Addressing these considerations will help ensure the continued growth and successful implementation of AI technology in chemistry research. The first step in addressing ethical issues is to identify them clearly. Ethical concerns must be considered for all actors in AI-driven chemistry research, including research groups, commercial vendors of AI development tools, and commercial vendors of AI-driven chemistry tools. Ethical AI considerations for chemistry researchers fall into three broad categories: 1) data privacy and security when using third-party chemistry datasets or one another’s AI-driven chemistry tools; 2) safety concerns regarding the use and accessibility of AI-driven chemistry tools with the potential for misuse; and 3) issues related to developing an AI-driven chemistry tool and the perception of fairness across the chemistry research community. (Gadade et al., 2024; Leontidis, 2024; Jantunen et al., 2024; Carobene et al., 2024)

 

3.1. Data Privacy and Security

 

Rapid advancements in artificial intelligence and machine learning offer immense potential to improve workflows across multiple fields, including academia and industry. Chemistry is one of many fields that utilize data-centric techniques for studying chemistry-related problems and the course of chemical reactions. Datasets of varying sizes and content are central to AI/ML workflows, where the challenge lies in developing models that extract information from data. Given the competitive nature of chemical research, preserving data privacy and security across all stakeholders is essential.

 

Data privacy and security concerns affect chemistry research across the public and private sectors. Academic researchers engage in collaborative research projects to obtain impactful results more quickly than would be possible independently. Collaborative research involves sharing data across institutions, raising concerns about the confidentiality of sensitive data such as formulary information and structural blueprints. Compliance with various privacy policies is required to avoid the misuse of confidential data. Such policies also hinder large consortium projects, resulting in smaller cross-institution collaborations, which negatively affect the outcome of discoveries through a lack of model diversity. AI/ML research on chemistry data is further hampered because public datasets do not represent the diversity of chemistry research questions, and the requirement of compatibility with AI/ML workflows reduces the number of usable public datasets still further.

 

In industries that work with sensitive intellectual property, the confidentiality and security of data are of utmost concern. IP-sensitive data are often stored locally or on private clouds, which constrains compatibility with cloud-based AI/ML technologies and requires explicit agreement before sensitive data may be stored on external servers. Ethical questions regarding IP-sensitive data also arise when less data-hungry models are pre-trained on public chemistry datasets and then transfer knowledge to proprietary data: what information is carried into these models that could be used to reconstruct test data outside the institution? Security vulnerabilities in AI/ML models also arise through adversarial attacks, in which models are probed with crafted queries, beyond ordinary tests of generalization, with the aim of inferring information about the training data, such as its distribution or the membership of specific records. (Abduldayan et al., 2021; Chenthara et al., 2020; Jean-Quartier et al., 2022; Rantasaari, 2022)
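As a toy illustration of the kind of leakage such attacks exploit (not a real attack tool; all data below is synthetic and the setup is deliberately simplistic), consider a model that memorizes its training set: an attacker who can query prediction confidences can distinguish training members from non-members.

```python
# Toy membership-inference illustration: a 1-nearest-neighbour "model"
# that memorizes its training data. Confidence is highest on points it
# has seen before, and that gap is the attacker's signal.
def confidence(model_train, query):
    """Confidence = inverse of distance to the nearest memorized point."""
    nearest = min(abs(query - x) for x in model_train)
    return 1.0 / (1.0 + nearest)

train = [0.10, 0.35, 0.62, 0.90]   # stand-in for "proprietary" training data
member, non_member = 0.35, 0.50

c_member = confidence(train, member)   # exactly 1.0: memorized match
c_non = confidence(train, non_member)  # below 1.0: never seen in training

print(c_member, c_non)
# The confidence gap between member and non-member queries is what a
# membership-inference attack exploits to learn who or what was in the data.
```

Real models are not pure memorizers, but overfitted ones behave enough like this that confidence-gap attacks are a documented privacy risk.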

 

4. Bias and Fairness in AI Algorithms

 

Artificial intelligence (AI) has become increasingly prevalent in many aspects of life, and more recently it is being utilized within chemistry and drug discovery pipelines. As with any technology, however, there may be inherent negative effects. There is concern over whether AI outputs are trustworthy, fair, and unbiased. This is particularly troubling in chemistry and drug discovery, since it could result in falsely labeled compounds, or in missed compounds that later need to be rediscovered.

 

4.1. Dataset Constraints

There is a concern that the input to AI pipelines, the dataset, may be missing compounds or contain improperly annotated ones, since chemical space is extremely vast. It is reported that a dataset containing approximately 200 million compounds can provide good results for an AI-driven generative system. However, since the medicinal space of interest contains approximately 22 million targets, there may inherently be poorly described minorities in such a set. If this dataset is leveraged directly, poorly represented compounds may go unconsidered for further experimentation, and the risk is that a good lead compound goes undiscovered as a result.
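A minimal sketch of the kind of representation audit that can surface such poorly described minorities before training; the scaffold labels, counts, and 5% threshold below are all made up for illustration.

```python
from collections import Counter

# Hypothetical compound library: each entry labeled by its (toy) scaffold.
dataset = ["benzene"] * 950 + ["pyridine"] * 40 + ["indole"] * 10

counts = Counter(dataset)
total = len(dataset)

# Flag any scaffold class that makes up less than 5% of the library;
# such classes risk being ignored or poorly modeled by a generative system.
for scaffold, n in counts.most_common():
    share = n / total
    flag = "  <- underrepresented" if share < 0.05 else ""
    print(f"{scaffold:10s} {n:5d} ({share:.1%}){flag}")
```

Running this flags pyridine (4%) and indole (1%) as underrepresented, exactly the kind of minority that a model trained on the raw library would tend to neglect.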

 

This is similar to concerns raised by current implementations of generative AI in research settings on other high-value topics, including grant writing in academia. The fear is that outputs will look realistic yet cannot be trusted, since the system’s outputs will likely be representative of its training dataset, including whatever bias or drift that dataset contains.

 

4.2. Generation Outputs

There is also concern about bias, particularly since AI systems can generate outputs that favor an unwanted or misleading idea. This is problematic in AI drug discovery, since it could mean that a certain group of analogues is favored; one study raising this concern, for instance, focused primarily on neurodegenerative targets.

 

Overall, as AI-powered experimental chemistry pipelines are rolled out, it is conceivable that certain analogues or compounds will be pushed to the front on the basis of structural scaffolds or similarities rather than merit. So the questions remain: how do researchers deem AI outputs trustworthy? How can trust in AI outputs, a fair reputation, and unbiased results be secured? There seems to be an unexplored chasm of trust that must be bridged before this cutting-edge technology can be properly leveraged to its full advantage. (Ahadian and Guan, 2024; Mak et al., 2023; Alizadehsani et al., 2024; Belenguer, 2022)

 

5. Transparency and Explainability in AI Models

 

Transparency and explainability are essential ethical concerns for the deployment and regulatory acceptance of any AI model used in the generation of chemical knowledge. Transparency refers to a system’s ability to communicate its limitations, explain how it works, allow its training process to be replicated on new data, and give users access to intermediate decisions. Explainability, in contrast, refers to an AI model’s ability to break down and conceptualize how it arrives at its predicted outputs. More demandingly, explainability requires interpretability: explanations must be understandable and reasonably recognizable to a human observer. As generations of AI models, especially deep learning methods, become more complex and move beyond the intuition of the researchers who trained them, novel AI architectures must be accompanied by suitable transparency and explainability efforts.

 

The recent emergence of sophisticated AI models raises questions about how these systems affect the production of knowledge, which modeling outputs should or should not be challenged, and how the models arrive at those outputs. As questions arise about the limits of AI models and how those limits should be communicated, researchers must consider how the model was originally fashioned by its developers. Modern AI systems consist of unmanageably large collections of opaque parameters fitted to training data so as to capture correlations between predictors and outcomes. This fitting process uncovers patterns without explicit causal relationships; even the developers of such models do not completely understand their internal workings or the precise conditions under which their predictions hold. The sheer number of parameters and their lack of interpretability pose challenges for any explanation, given the vast volume of operations that transpire inside a single run of the model and the lack of intuitive, meaningful representations of its internal decisions. AI models currently used for chemical prediction are based on large deep learning architectures that are both computationally demanding and highly non-linear, making it practically impossible to simply share the parameters of a model and expect reproducibility in any relevant scientific sense.
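One widely used family of post-hoc, model-agnostic explainability techniques is permutation feature importance: shuffle one input feature and measure how much the model’s error grows. A minimal sketch follows, using a deliberately trivial stand-in “model” rather than a real chemical property predictor.

```python
import random

# Toy "property predictor": depends strongly on feature 0, not at all on
# feature 1. A real chemical model would be a trained network, but the
# permutation-importance procedure itself is identical.
def model(x):
    return 3.0 * x[0] + 0.0 * x[1]

random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [model(x) for x in X]  # labels generated by the model itself

def mse(X, y):
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Error increase after shuffling one feature column."""
    shuffled = [row[:] for row in X]
    col = [row[feature] for row in shuffled]
    random.shuffle(col)
    for row, v in zip(shuffled, col):
        row[feature] = v
    return mse(shuffled, y) - mse(X, y)

print(permutation_importance(X, y, 0))  # large: feature 0 drives predictions
print(permutation_importance(X, y, 1))  # 0.0: feature 1 is irrelevant here
```

The appeal of the method is precisely that it treats the model as a black box: it needs only queries, not access to internal parameters, which matters for the opaque architectures described above.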

 

The scientific community must consider how AI modeling results can be reasonably interpreted, understood, communicated, and carried forward through follow-up investigations. Efforts to better understand the workings of AI models employed in chemical research and knowledge generation must be communicated to users. How a chemical-knowledge AI model ‘thinks’, or progresses through a given research or knowledge-generation process, must be discussed with the model’s developers and made visible to users. AI-driven chemical degradation proposals, for example, should be used in tandem with transparency and explainability efforts on the models that generated them. Model prediction confidence, limits, and alternative modeling outputs must be quantified and evaluated, and these understandings must moreover be accessible to the wider scientific community for scrutiny and accountability, in conformity with scientific ethical practice. (Hermann et al., 2021; Vittoria et al., 2024; Yuan et al., 2024; Zednik and Boelsen, 2022; Alizadehsani et al., 2024)

 

6. Accountability and Responsibility in AI-Driven Research

 

The increasing integration of artificial intelligence (AI) into scientific research has brought forth new ethical concerns. It is crucial to explore how accountability and responsibility are redefined in AI-driven chemistry research, especially in light of cases where the use of AI-generated results has led to negative outcomes.

 

An investigation into possible violations of research integrity in a study was announced. The study, which found that limiting the use of disinfectants during multi-hour building re-entry after cleaning and disinfection during the COVID-19 pandemic would not increase transmission risk, relied on several AI-generated results. The investigation aimed to determine the accuracy of these AI results and whether their inaccuracies may have influenced the publication of incorrect conclusions. On the one hand, AI-generated results can help researchers reach novel conclusions in a fundamentally new way. On the other, their use raises questions about how responsibility for the inaccuracy of those results is assigned among the research staff, given that their use was approved by the authors of the study.

 

In light of the mistaken conclusions drawn in the environmental chemistry study, it is necessary to consider how accountability and responsibility are redefined in AI-driven research. Ethics boards in a chemistry department were presented with samples of AI-generated chemical reaction results and asked to consider the related ethical questions. An essential question was whether AI-generated predictions should be regarded as ‘published’ results. This question is particularly pertinent to the investigation: only if the cited predictions are regarded as ‘published’ can a paper be considered to have claimed findings that are fully substantiated and verified, and thereby be held responsible for the accuracy of its conclusions. Conversely, if AI-generated results are considered unpublished, current chemical indexing databases would not provide access to the cheap but error-prone predictive power of models for many chemistries. (Borger et al., 2023; Elbadawi et al., 2024; Doshi and Hauser, 2024)

 

7. Impact of AI on Reproducibility and Integrity of Research

 

In response to heightened concern for the reproducibility and integrity of research in the sciences, AI-driven approaches are being employed to evaluate thousands or millions of peer-reviewed papers, datasets, and supplementary figures for red flags indicative of known classes of problems. Such solutions can assist in identifying misbehavior and misconduct. For example, one internally deployed tool uses several AI models to analyze submitted research papers for “cohorts”: groups of possible authors whose patterns of writing suggest collusion. As companies and academic institutions race to develop and deploy similar approaches, potential limitations remain. Institutional or funder-level policy and oversight structures must be established to prevent abuse and to address concerns over bias, transparency, and accuracy. Codified protocols for adjudicating these red flags, with assurances of fairness and accountability, are also paramount. A solution that is “too easy,” such as summarily dismissing flagged papers or rejecting the researchers involved, could further undermine trust in the scientific endeavor.
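A hypothetical, deliberately simplified sketch of this kind of screening: flag pairs of manuscripts whose vocabularies are suspiciously similar, using Jaccard similarity of word sets. Real tools use far richer stylometric features; the texts, identifiers, and threshold below are all made up.

```python
# Jaccard similarity over word sets: |A ∩ B| / |A ∪ B|.
def vocabulary(text):
    return set(text.lower().split())

def jaccard(a, b):
    va, vb = vocabulary(a), vocabulary(b)
    return len(va & vb) / len(va | vb)

# Synthetic "manuscripts" (ms2 is a near-copy of ms1).
papers = {
    "ms1": "the catalyst shows high turnover under mild aqueous conditions",
    "ms2": "the catalyst shows high turnover under mild aqueous conditions today",
    "ms3": "deep models predict binding affinity from molecular graphs",
}

THRESHOLD = 0.8  # arbitrary cutoff; a flag should trigger human review,
                 # not automatic rejection
for a in papers:
    for b in papers:
        if a < b and jaccard(papers[a], papers[b]) > THRESHOLD:
            print(f"flag for review: {a} vs {b}")  # prints: flag for review: ms1 vs ms2
```

Note that the output of such a screen is only a prompt for adjudication: the protocols and fairness assurances discussed above determine what happens to a flagged pair.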

 

Importantly, AI-powered tools could assist in the governance of research integrity at the institutional and funder levels. AI-driven approaches could help characterize the “research landscape” and monitor red flags surrounding funding, publication, and the eventual influence of research outcomes. Concerns have been raised over the accumulation of influence by a select few teaching hospitals and academic medical institutions. An analogy can be drawn from algorithmic stock trading, where sub-second transactions in financial markets are monitored by surveillance systems that analyze trading activity faster than any human could; a similar approach could be beneficial when applied to the dissemination of research. As AI tools are developed to output information, those outputs could be tracked for rapid analysis of the entities requesting or purchasing research outputs, such as funding agencies or large pharmaceutical companies. Such a system could help ensure that research findings are disseminated more widely and equitably, increasing the pace of innovation and ultimately benefiting the public, scientific, and medical communities. (Vasey, 2023; Khemasuwan and Colt, 2021; Rubinic et al., 2024; Ghadiri, 2022)

 

8. Case Studies and Examples of Ethical Dilemmas in AI-Driven Chemistry Research

 

The pervasiveness of artificial intelligence (AI) in various fields, including chemistry and pharmaceuticals, suggests a survival-of-the-fittest dynamic: those who learn to use it to their advantage will thrive, while others will become obsolete. The accelerated pace of research demands critical thinking and a change of mindset, from a focus on publications to thinking more deeply about the consequences of all active chemistry research. It demands a re-examination of legacy decision-making processes, as AI ushers in a new era of black boxes and safe-space simulations in which lapses in traditional academic ethics have outsized consequences. Academic chemists are partly responsible for thinking this through, since AI will form the basis of future decisions about research funding and continuation; if AI or algorithms fail to surface a desired result, say, the sought-after hydrogen atoms in konjac glucomannan hydrogels, the research may simply be stopped, with potentially grave consequences. This is a problem in itself, as AI and algorithms are not morally neutral but carry biases that are amplified, making it necessary to consider the important ethical questions beforehand.

 

Four AI algorithm applications in organic chemistry that have received academic attention and funding, plus a fifth still in development but already being marketed, are discussed: automatic microwave synthesis of commodity chemicals and pharmaceuticals; chemical quantum mechanical modeling of molecular reactivity; bioorthogonal drug delivery; and greedy search algorithms for synthesizing molecular graphs. Each can have far-reaching consequences in academic and industrial settings, and the concerns around such implementations ultimately cannot be ignored. The ethical dilemmas include: hypotheses generated without qualified chemists; disregard for the ingenuity and creativity needed in laboratories; experimentalists reduced to ‘data cleaners’ as automated reactions and quantum chemical calculations deliver faster and cheaper results; fragmentation of the chemical disciplines; and the prospect of very few chemists or chemical companies owning entire pharmaceutical libraries.

 

The five vignettes raise explicit ethical dilemmas with the potential to change academic and industrial organic chemistry as conventionally understood. Most concerns relate to the possible unintended consequences of irrationally transferring full responsibility from humans to computers and algorithms. It takes a broad understanding of ethics to consider the unintended consequences of AI becoming a research decision-maker. This could fragment the chemical disciplines: chemists who own the AI libraries that dominate exploration would put others out of work in an arms race resembling the current divide between industrialized and developing countries, and economic inequality would increase. This imagined future is already here, and general ethical concerns need to be translated into standard operating procedures, boundaries, and compensatory mechanisms that guard against unintended consequences and promote vigilance over blind acceptance of AI-driven decisions. Before AI is applied to research and development as fast and broadly as possible, it is worth pausing to consider the implications: shortcuts around thorough vetting have nearly always gone awry in retrospect. (Scatiggio, 2020; Devillers et al., 2021; Edenberg and Wood, 2023; Keles, 2023)

 

9. Regulatory Frameworks and Guidelines for Ethical AI Research

 

To combat the threats that can emerge from the use of AI in chemistry, the development of ethical codes of conduct and regulatory frameworks will be crucial. Regulations specifically addressing the use and development of AI in chemistry have not yet been established, but existing legislative initiatives have taken the first steps toward them. As among the most advanced AI technologies in use today, the capabilities of large language models are a prime focus for scrutiny. These models are a type of deep learning model trained on extensive text data to produce plausible text. Apart from the desired outcome of new and relevant text, however, they can also output toxic text such as hate speech, disinformation, and sexual innuendo. In addition, since these models are trained on text from the internet, it is impossible to certify that all the underlying data sources are scientifically valid, up to date, and free of bias.

 

In the European Union, the Artificial Intelligence Act has been proposed to deal with concerns regarding high-risk AI applications. AI-based systems are categorized by level of risk and, among other obligations, high-risk systems will have to undergo rigorous pre- and post-market assessments. In February 2020, the European Union published its long-awaited White Paper on AI, whose broad policy approach was not to pursue sweeping legislation immediately but to build an AI regulatory ecosystem of overlapping and complementary rules. In April 2021, the European Commission published its proposal for the AI Act, setting out the legal obligations that would apply to AI systems. Like the White Paper, the Act focuses mainly on the risks associated with AI systems deployed within the EU.

 

However, it is also essential to equip chemists who work with generative models with a proper ethical responsibility framework with which they can assess their own actions. Responsible Research and Innovation has received attention from both policymakers and researchers since the early 2000s. It entails examining the long-term societal impacts of a technology and assessing its desirability for society and its potential for societal good. Beyond science and technology, the key approach is envisioning scenarios, both positive and negative, of how society and technology could co-evolve over the coming decades, with the aim of choosing a desirable path forward. Since emerging technologies can radically shape the future of chemistry, a short reflection exercise of this kind is proposed after each standard cheminformatics protocol that uses a generative model. (Baum et al., 2021; Struble et al., 2020; Schmeisser et al., 2023; Jiménez-Luna et al., 2021)

 

10. Collaboration and Interdisciplinary Approaches to Address Ethical Concerns

 

Efforts to address the ethical concerns of AI-driven chemistry research require collaboration and interdisciplinary approaches among researchers, ethicists, policymakers, and industry stakeholders. No single group can adjust to and mitigate these concerns alone; a more responsive community is possible only by working together. Collaboration among AI experts from fields such as biology, sociology, and rhetoric can address specific applications in research practice and develop fields of knowledge that have not yet been established. Policy approaches could be developed in a welcoming manner, contributing to positive perceptions of AI across sectors, with efforts focused on understanding the nuances of reactions to AI across comment sections, platforms, and actors, including NGOs and industry leaders. Disciplines such as sociology and literature can build a more comprehensive understanding of how AI and society co-evolve over time, emphasizing the complexities of each side's context, such as societal, corporate, and academic expectations. Robust approaches for reconfiguring AI language models in response to human concerns could include developing vocabularies and norms for positive interactions, as well as exploring alternative styles of collaboration between humans and AI. Collaborative platforms for interdisciplinary projects could enhance societal conversations about the role of AI in society, encouraging the co-construction of shared language about AI use cases, including the expectations, concerns, and questions most relevant to different cultural and disciplinary contexts.
These forums could contribute in-depth contextualized knowledge, interests, and worries, providing the basis for more nuanced ventures and experiments than those currently devised primarily within the tech sphere. (Hermann et al., 2021)(Ananikov, 2024)(Back et al., 2024)(Rial, 2024)(Chen et al., 2024)

 

11. Future Directions and Emerging Ethical Issues in AI-Driven Chemistry Research

 

In addition to the ongoing ethical concerns, emerging ethical considerations may arise as AI-driven chemistry research evolves. These concerns pertain to the unforeseen consequences of deploying AI technologies in inappropriate contexts, the ethical significance of AI-generated knowledge, and the growing influence of commercial and corporate actors at the AI–chemistry interface.

 

As AI technologies proliferate within chemistry, their effects on chemists and on the field are likely to be complex, indirect, and unintended. Research has shown that technology may not work as expected and can exert chilling effects on scientific research and inquiry. As AI-generated knowledge becomes influential in shaping future research endeavors, and as questions of ethics and trust emerge, there may be unforeseen consequences of extensive and unquestioned dependence on AI, with non-AI knowledge crowded out.

 

The demise of chemistry as a creative and entrepreneurial endeavor has been foreseen, raising the ethical question of the significance of knowledge generated and owned by AI systems. This echoes the concerns of other professions about the large language models used to generate chemical knowledge. As the interface between AI and chemistry is increasingly shaped by the practices of its largest commercial operators, ethical questions arise regarding the influence and ownership of this knowledge and its role in shaping future chemistry agendas. Given that chemicals are the building blocks of all consumer products, there will be a natural tension between public and commercial interests. (Ananikov, 2024)(Rial, 2024)(Back et al., 2024)

References:

Mita, S. “AI proctoring: Academic integrity vs. student rights.” Hastings LJ (2022). uclawsf.edu

Sevnarayan, Kershnee, and Mary-Anne Potter. “Generative Artificial Intelligence in distance education: Transformations, challenges, and impact on academic integrity and student voice.” Journal of Applied Learning and Teaching 7, no. 1 (2024). researchgate.net

Yusuf, Abdullahi, Nasrin Pervin, and Marcos Román-González. “Generative AI and the future of higher education: a threat to academic integrity or reformation? Evidence from multicultural perspectives.” International Journal of Educational Technology in Higher Education 21, no. 1 (2024): 21. springer.com

Sbaffi, L. and Zhao, X. “Evaluating a pedagogical approach to promoting academic integrity in higher education: an online induction program.” Frontiers in Psychology (2022). frontiersin.org

Ananikov, V. P. “Top 20 Influential AI-Based Technologies in Chemistry.” Artificial Intelligence Chemistry (2024). sciencedirect.com

Baum, Zachary J., Xiang Yu, Philippe Y. Ayala, Yanan Zhao, Steven P. Watkins, and Qiongqiong Zhou. “Artificial intelligence in chemistry: current trends and future directions.” Journal of Chemical Information and Modeling 61, no. 7 (2021): 3197-3212. acs.org

Kuntz, D. and Wilson, A. K. “Machine learning, artificial intelligence, and chemistry: How smart algorithms are reshaping simulation and the laboratory.” Pure and Applied Chemistry (2022). degruyter.com

Ayres, Lucas B., Federico JV Gomez, Jeb R. Linton, Maria F. Silva, and Carlos D. Garcia. “Taking the leap between analytical chemistry and artificial intelligence: A tutorial review.” Analytica Chimica Acta 1161 (2021): 338403. [HTML]

Biriukov, D. and Vácha, R. “Pathways to a shiny future: Building the foundation for computational physical chemistry and biophysics in 2050.” ACS Physical Chemistry Au (2024). acs.org

Qamar, B. “Risks of Bioterrorism Escalating Due to Artificial Intelligence.” Pakistan Research Journal of Social Sciences (2024). prjss.com

Koblentz, G. D. “Emerging technologies and the future of CBRN terrorism.” The Washington Quarterly (2020). [HTML]

Rubinic, Igor, Marija Kurtov, Ivan Rubinic, Robert Likic, Paul I. Dargan, and David M. Wood. “Artificial intelligence in clinical pharmacology: a case study and scoping review of large language models and bioweapon potential.” British Journal of Clinical Pharmacology 90, no. 3 (2024): 620-628. [HTML]

Krin, A. “Artificial intelligence: possible risks and benefits for BWC and CWC.” (2023). cbwnet.org

Kustov, M. V., Melnychenko, A. S., and Kalugin, V. D. “Minimization of the harmful effect from emergency situations with the pollution of chemical and radioactive substances into the atmosphere.” (2023). nuczu.edu.ua

Pietilä, Anna-Maija, Sanna-Maria Nurmi, Arja Halkoaho, and Helvi Kyngäs. “Qualitative research: Ethical considerations.” The application of content analysis in nursing science research (2020): 49-69. [HTML]

Cheraghi, Rozita, Leila Valizadeh, Vahid Zamanzadeh, Hadi Hassankhani, and Anahita Jafarzadeh. “Clarification of ethical principle of the beneficence in nursing care: an integrative review.” BMC nursing 22, no. 1 (2023): 89. springer.com

O’Donoghue, K. “Learning analytics within higher education: Autonomy, beneficence and non-maleficence.” Journal of Academic Ethics (2023). [HTML]

Brear, M. R. and Gordon, R. “Translating the principle of beneficence into ethical participatory development research practice.” Journal of International Development (2021). [HTML]

Gadade, Dipak D., Deepak A. Kulkarni, Ravi Raj, Swapnil G. Patil, and Anuj Modi. “Pushing Boundaries: The Landscape of AI‐Driven Drug Discovery and Development with Insights Into Regulatory Aspects.” Artificial Intelligence and Machine Learning in Drug Design and Development (2024): 533-561. [HTML]

Leontidis, G. “Science in the age of AI: How artificial intelligence is changing the nature and method of scientific research.” (2024). abdn.ac.uk

Jantunen, Marianna, Richard Meyes, Veronika Kurchyna, Tobias Meisen, Pekka Abrahamsson, and Rahul Mohanani. “Researchers’ Concerns on Artificial Intelligence Ethics: Results from a Scenario-Based Survey.” In Proceedings of the 7th ACM/IEEE International Workshop on Software-intensive Business, pp. 24-31. 2024. acm.org

Carobene, Anna, Andrea Padoan, Federico Cabitza, Giuseppe Banfi, and Mario Plebani. “Rising adoption of artificial intelligence in scientific publishing: evaluating the role, risks, and ethical implications in paper drafting and review process.” Clinical Chemistry and Laboratory Medicine (CCLM) 62, no. 5 (2024): 835-843. degruyter.com

Abduldayan, Fatimah Jibril, Fasola Petunola Abifarin, Georgina Uchey Oyedum, and Jibril Attahiru Alhassan. “Research data management practices of chemistry researchers in federal universities of technology in Nigeria.” Digital Library Perspectives 37, no. 4 (2021): 328-348. futminna.edu.ng

Chenthara, S., Ahmed, K., Wang, H., Whittaker, F., and Chen, Z. “Healthchain: A novel framework on privacy preservation of electronic health records using blockchain technology.” Plos one (2020). plos.org

Jean-Quartier, C., Rey Mazón, M., Lovrić, M., and Stryeck, S. “Collaborative data use between private and public stakeholders—a regional case study.” Data (2022). mdpi.com

Rantasaari, Jukka. “Multi-stakeholder research data management training as a tool to improve the quality, integrity, reliability and reproducibility of research.” LIBER Quarterly: The Journal of the Association of European Research Libraries 32, no. 1 (2022): 1-54. liberquarterly.eu

Ahadian, Pegah, and Qiang Guan. “AI Trustworthy Challenges in Drug Discovery.” In International Workshop on Trustworthy Artificial Intelligence for Healthcare, pp. 1-12. Cham: Springer Nature Switzerland, 2024. [HTML]

Mak, Kit-Kay, Yi-Hang Wong, and Mallikarjuna Rao Pichika. “Artificial intelligence in drug discovery and development.” Drug Discovery and Evaluation: Safety and Pharmacokinetic Assays (2023): 1-38. nih.gov

Alizadehsani, Roohallah, Solomon Sunday Oyelere, Sadiq Hussain, Senthil Kumar Jagatheesaperumal, Rene Ripardo Calixto, Mohamed Rahouti, Mohamad Roshanzamir, and Victor Hugo C. De Albuquerque. “Explainable artificial intelligence for drug discovery and development-a comprehensive survey.” IEEE Access (2024). ieee.org

Belenguer, L. “AI bias: exploring discriminatory algorithmic decision-making models and the application of possible machine-centric solutions adapted from the pharmaceutical ….” AI and Ethics (2022). springer.com

Hermann, E., Hermann, G., and Tremblay, J. C. “Ethical artificial intelligence in chemical research and development: a dual advantage for sustainability.” Science and Engineering Ethics (2021). springer.com

Vittoria Togo, Maria, Fabrizio Mastrolorito, Angelica Orfino, Elisabetta Anna Graps, Anna Rita Tondo, Cosimo Damiano Altomare, Fulvio Ciriaco, Daniela Trisciuzzi, Orazio Nicolotti, and Nicola Amoroso. “Where developmental toxicity meets explainable artificial intelligence: state-of-the-art and perspectives.” Expert Opinion on Drug Metabolism & Toxicology 20, no. 7 (2024): 561-577. tandfonline.com

Yuan, Y., Chaffart, D., Wu, T., and Zhu, J. “Transparency: The Missing Link to Boosting AI Transformations in Chemical Engineering.” Engineering (2024). sciencedirect.com

Zednik, C. and Boelsen, H. “Scientific exploration and explainable artificial intelligence.” Minds and Machines (2022). springer.com

Borger, Jessica G., Ashley P. Ng, Holly Anderton, George W. Ashdown, Megan Auld, Marnie E. Blewitt, Daniel V. Brown et al. “Artificial intelligence takes center stage: exploring the capabilities and implications of ChatGPT and other AI‐assisted technologies in scientific research and education.” Immunology and cell biology 101, no. 10 (2023): 923-935. wiley.com

Elbadawi, Moe, Hanxiang Li, Abdul W. Basit, and Simon Gaisford. “The role of artificial intelligence in generating original scientific research.” International Journal of Pharmaceutics 652 (2024): 123741. sciencedirect.com

Doshi, A. R. and Hauser, O. P. “Generative AI enhances individual creativity but reduces the collective diversity of novel content.” Science Advances (2024). science.org

Vasey, B. “… systems based on artificial intelligence: an application to postoperative complications and a cross-specialty reporting guideline for early-stage clinical evaluation.” (2023). ox.ac.uk

Khemasuwan, D. and Colt, H. G. “Applications and challenges of AI-based algorithms in the COVID-19 pandemic.” BMJ Innovations (2021). researchgate.net

Ghadiri, P. “Artificial Intelligence Interventions in the Mental Healthcare of Adolescents.” (2022). mcgill.ca

Scatiggio, V. “… the issue of bias in artificial intelligence to design ai-driven fair and inclusive service systems. How human biases are breaching into ai algorithms, with severe impacts ….” (2020). polimi.it

Devillers, Laurence, Françoise Fogelman-Soulié, and Ricardo Baeza-Yates. “AI & human values: Inequalities, biases, fairness, nudge, and feedback loops.” Reflections on artificial intelligence for humanity (2021): 76-89. [HTML]

Edenberg, Elizabeth, and Alexandra Wood. “Disambiguating algorithmic bias: from neutrality to justice.” In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, pp. 691-704. 2023. philpapers.org

Keles, S. “Navigating in the moral landscape: analysing bias and discrimination in AI through philosophical inquiry.” AI and Ethics (2023). [HTML]

Struble, Thomas J., Juan C. Alvarez, Scott P. Brown, Milan Chytil, Justin Cisar, Renee L. DesJarlais, Ola Engkvist et al. “Current and future roles of artificial intelligence in medicinal chemistry synthesis.” Journal of medicinal chemistry 63, no. 16 (2020): 8667-8682. acs.org

Schmeisser, Sebastian, Andrea Miccoli, Martin von Bergen, Elisabet Berggren, Albert Braeuning, Wibke Busch, Christian Desaintes et al. “New approach methodologies in human regulatory toxicology–Not if, but how and when!.” Environment International 178 (2023): 108082. sciencedirect.com

Jiménez-Luna, José, Francesca Grisoni, Nils Weskamp, and Gisbert Schneider. “Artificial intelligence in drug discovery: recent advances and future perspectives.” Expert opinion on drug discovery 16, no. 9 (2021): 949-959. tandfonline.com

Back, Seoin, Alán Aspuru-Guzik, Michele Ceriotti, Ganna Gryn’ova, Bartosz Grzybowski, Geun Ho Gu, Jason Hein et al. “Accelerated chemical science with AI.” Digital Discovery 3, no. 1 (2024): 23-33. rsc.org

Rial, R. C. “AI in analytical chemistry: Advancements, challenges, and future directions.” Talanta (2024). [HTML]

Chen, Z., Chen, C., Yang, G., He, X., Chi, X., Zeng, Z., and Chen, X. “Research integrity in the era of artificial intelligence: Challenges and responses.” Medicine (2024). lww.com

Ananikov, V. P. “Artificial Intelligence Chemistry.” Artificial Intelligence (2024). researchgate.net
