Call for Papers: Special Issue on AI-IA Nexus

Artificial Intelligence-Information Assurance Nexus: The Future of Information Systems Security, Privacy, and Quality

A PDF of this call can be found here

Special Issue Editors:

Rui Chen, Iowa State University
Juan Feng, Tsinghua University
Miguel Godinho de Matos, Católica Lisbon School of Business & Economics
Carol Hsu, University of Sydney
H. Raghav Rao, University of Texas at San Antonio

Email: Misq.aiia.si@gmail.com 

Abstract Proposal Deadline: January 20, 2025

 

Stage 1 Submission Deadline: October 31, 2025

Motivations for the Special Issue

Digital threats continue to impede information assurance. Many issues in information assurance have arisen in the last decade or two, including risk management, information quality, intellectual property, privacy protection, compliance with regulations, and continuity of operations (Mahmood et al., 2010; Forrest, 2023). As a result, protecting information has become a global priority, and collaborative efforts are being made to prevent, detect, and react to threats to information quality, authenticity, integrity, confidentiality, and availability (European Parliament, 2018; White House, 2023a). As society steps into the age of generative AI (GenAI) (Dennehy et. al., 2023), fresh challenges and opportunities are arising in the realms of information security, privacy, and quality. Questions have emerged regarding the role and intended/unintended consequences of GenAI in information assurance. GenAI is believed to pose a paradox, serving as a dual-edged sword in the realm of information assurance (Robidoux, 2024). 

GenAI creates new content, whereas traditional AI mostly makes predictions and classifications based on existing datasets. GenAI is designed to reason and operate independently across various domains, whereas traditional AI focuses on narrow tasks (e.g., playing chess and translating languages by following specific rules). In addition, GenAI works with multiple data modalities (e.g., text, images, and videos), whereas traditional AI primarily functions in a single mode of data. These new capabilities of GenAI open new possibilities for its applications in a wide range of areas. GenAI models can range from generalized models to domain-specific models that automate tasks and generate content adhering to industry-specific terminologies, context-specialized knowledge, and tailored experiences. Its power has sparked discussions on ethics and societal questions regarding the potential impact on employment, bias, privacy, and human-AI relationships. 

The emergence of GenAI is poised to exert a profound impact on assurance (Barrett et al., 2023; Sun et al., 2023). On the one hand, GenAI has been recognized for its ability to bolster information assurance. The IBM Institute for Business Value (2024) commented that GenAI has the potential to strengthen business defenses, accelerate security processes, and identify emerging threats as they arise. Studies have also noted that GenAI may be able to address information management challenges, including quality (Bhatti, 2024). On the other hand, GenAI heightens the potency of existing threats, allows the fabrication of false information, fuels intellectual property theft, and poses challenges to governance and compliance. The 2024 February deepfake fraud incident in Hong Kong is a case in point (Chen & Magramo, 2024). Even unexpected users can threaten the protection of data, which is manifested by examples of employees sharing confidential data with GenAI models. Industry reports from Forrester (2023), Cisco (2024), and the IBM Institute for Business Value (2024) have highlighted GenAI-induced risks to information assurance as a major threat to firms’ adoption and implementation of GenAI initiatives. Similar concerns have been acknowledged from a government perspective in the 2023 U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (White House, 2023b) and the 2023 congressional research report: Generative Artificial Intelligence and Data Privacy: A Primer (Congressional Research Service, 2023).

Examples of how GenAI may exacerbate information assurance issues include:

  1. Sophisticated human-deception attacks: Cybercriminals can use GenAI to compromise businesses by writing targeted emails. In addition, GenAI can create deepfakes and voice clones, resulting in “vishing” that uses phone calls and voice messages to trick people into sharing sensitive information. 
  2. Hallucination and confabulation: GenAI is known to create incorrect information that is seemingly correct. In addition, attackers may trick GenAI into recommending unverified software code packages to unexpected users. When attackers embed malicious codes into packages endorsed by GenAI, users may unwittingly download and utilize these harmful codes, creating security vulnerabilities ripe for exploitation.
  3. Intellectual property theft: GenAI can create materials that violate intellectual property rights. For example, GenAI can create content that resembles existing copyrighted materials, resulting in legal ramifications and conflicts.
  4. Challenges in regulation and compliance: Keeping GenAI models in check is subject to considerable hurdles because of how intricate and swiftly they evolve. Ensuring compliance with data protection laws and standards will become increasingly difficult as GenAI becomes more autonomous and capable of making independent decisions.

Another source of threats to information assurance stems from attacks that are designed to target the way GenAI systems are trained and expected to be used. Many of these attacks can be mitigated by explicitly integrating information assurance considerations when designing GenAI systems. For example, GenAI tools may be subject to:

  1. Unreliable training data: A substantial amount of training data is employed in constructing large language models (LLMs). Yet this data may be of low quality and is often unverified. The security of the models is influenced by the quality of the training data, paving the way for potential vulnerabilities, unauthorized access, and compromises regarding sensitive information.
  2. Data poisoning: GenAI needs to be trained and tuned on inputs and outputs. Data poisoning occurs when inputs are manipulated to alter or corrupt the training data in LLMs, which consequently impacts the desired outputs of the overall system.
  3. Security leaks, inference attacks, and knowledge phishing: Security leaks in the context of GenAI refer to the unintended disclosure of sensitive information embedded within a model itself or through its responses. This is also known as inference attacks or knowledge phishing.
  4. Prompt injections: Prompt injections occur when malicious inputs are provided to an AI system to manipulate its output or to execute unintended actions. 

Cisco (2024) found that 92% of organizations “see GenAI as fundamentally different, requiring new techniques to manage data and risks.” The 2023 U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence calls for actions on refining GenAI by mitigating information assurance issues (White House, 2023b). Worldwide efforts are being made on these fronts to protect LLMs against threats of information fabrication, system misuse, privacy breaches, etc. Gartner recommends mitigation strategies, which include “establishing a governance entity and workflow, monitoring and blocking access, communicating acceptable use policies, exploring prompt engineering and API integrations, and prioritizing private hosting options” (Robidoux, 2024). However, there are growing concerns that excessive focus and regulation on data security and privacy may stifle and slow the advancement of GenAI, especially in terms of the European Union’s AI Act (Timis, 2023).

The relationship between GenAI and information assurance is depicted below.

Potential Topics

Tremendous new opportunities exist for information systems (IS) scholars to study information assurance issues within the context of GenAI, as traditional approaches may not work. The 2023 U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (White House, 2023b) calls on academia to join forces with the government, the private sector, and civil society to create a society-wide effort. Likewise, the World Economic Forum (2023) calls for researchers to collaborate with AI developers, standard-setting bodies, and policymakers to develop an all-stakeholder approach. In sharp contrast to the substantial attention that the government and the private sectors have paid to security and privacy in relation to GenAI, IS research in this area is limited (Berente, et. al., 2021; Lee et al., 2023; Samtani et al., 2023). 

Potential topics for this special issue include but are not limited to the following: What factors influence individuals’ security and privacy behavior in the presence of GenAI tools? How can we predict, analyze, and counteract the emerging threats to GenAI models? How can economic analysis contribute to combating information assurance threats in GenAI? What are the managerial strategies and their effectiveness in addressing GenAI-induced issues on data security and privacy? What are the key principles in attributing accountability and responsibilities for the risks in GenAI model output? In addition to these topics, submissions focusing on AI research related to addressing emerging information assurance issues are also encouraged for the special issue.

Sample topics include (1) individual behaviors, (2) organizational practices, (3) societal impacts, (4) risk management, (5) investments in assurance, (6) market effects, and (7) attacker analysis (Hui et al., 2016; Hoehle et al., 2022). These topics align with MIS Quarterly’s goal of “moving beyond management and use of technology to also include societal and policy impacts” (Brown, 2024, p. i). This special issue seeks research that goes beyond simple applications of existing theories and methods from the cybersecurity literature in IS. We invite studies that explore the unique information assurance challenges in the realm of GenAI, calling for the development and application of new theories or methods. 

By focusing on important research questions, this special issue will generate answers to address the NSF 10 Big Ideas (e.g., Future of Work and Harnessing the Data Revolution), NSF Focus Areas (e.g., Artificial Intelligence and Cybersecurity), National Academy of Engineering 14 Grand Challenges (e.g., Secure Cyberspace), and E.U. Strategies Priorities 2021-2027 (e.g., Cybersecurity). It also aligns with the national focus on exploring the intersection between AI and security (National Science Foundation, 2024). This special issue also connects with IS research streams, such as the Bright Internet. Finally, this special issue builds on the special issue by Mahmood et al. (2010) to address the new challenges posed by GenAI, including the bidirectional relationships between GenAI and information assurance.

Projected Timeline and Process

Timeline (subject to change)

Optional introductory workshops (virtual): informational sessions November 15, 2024
Two-page abstracts January 20, 2025
Feedback on abstracts March 31, 2025
Paper development workshop (virtual) May 10, 2025
First-round paper submission October 31, 2025
First-round decisions January 31, 2026
Workshop for authors with first-round revise-and-resubmit February, 2026
Second-round revisions due May 31, 2026
Second-round decisions August 31, 2026
Final revisions due November 30, 2026
Final revisions February 28, 2027

Process

  • The special issue guest editors will organize optional virtual workshops to describe the aims and processes of this special issue and answer questions. This will ensure that prospective authors in different time zones can attend.
  • Although optional, authors are strongly encouraged to submit two-page abstracts by January 20, 2025, so that the guest editors can evaluate the fit of the papers with the special issue. Please email them to: misq.aiia.si@gmail.com
  • Editors will offer feedback on abstracts by March 31, 2025, encouraging authors of promising abstracts to submit five-page extended abstracts for a virtual workshop scheduled on May 10, 2025. These five-page abstracts will not be formally reviewed prior to the workshop—they are simply meant to facilitate a constructive discussion at the event.
  • Authors must submit manuscripts through MIS Quarterly’s ScholarOne submission site located at https://mc.manuscriptcentral.com/misq by October 31, 2025.
  • The editorial team will screen all submitted manuscripts. Only manuscripts deemed to have a reasonable chance of acceptance in an accelerated review process will remain under consideration.
  • Manuscripts that pass the initial screening will undergo no more than two rounds of review. Manuscripts not accepted by the end of the second round will be rejected. The guest editors will make the final decisions based on the associate editors’ reports and review feedback.
  • Authors must adhere to a strict schedule for submission and revision of manuscripts. Authors may submit rejected papers as regular submissions to MIS Quarterly only if the special issue rejection letter recommends such an action. The guest editors will recommend submission as a regular MIS Quarterly article only in special circumstances, such as when a formally reviewed manuscript had a strong likelihood of acceptance but required revisions that, while perceived to be feasible, were unlikely to be accomplished within the special issue’s accelerated review schedule.

Estimation 

        We expect the number of proposal submissions to be in the range of 80-100. 

Associate Editors

Panagiotis Adamopoulos
Emory University
Jingjing Li
University of Virginia
Rodrigo Belo
Nova School of Business and Economics
Huigang Liang
University of Memphis
Indranil Bose
NEOMA
Alexander Maedche
Karlsruhe Institute of Technology
Lemuria Carter
University of Sydney
Ning Nan
University of British Columbia
Christy Cheung
Hong Kong Baptist University
Jella Pfeiffer
University of Stuttgart
Rahul De’
Indian Institute of Management Bangalore
Dandan Qiao
National University of Singapore
Amany Elbanna
University of Sussex
Sagar Samtani
Indiana University
Uri Gal
University of Sydney
Anastasia Sergeeva
Vrije Universiteit Amsterdam
Weiyin Hong
Hong Kong University of Science and Technology
Maha Shaikh
ESADE Business School
Nina Huang
University of Miami
Paolo Spagnoletti
Luiss Business School
Hartmut Höhle
University of Mannheim
Rohit Valecha
University of Texas at San Antonio
Allen Johnston
University of Alabama
Jing Wang
Hong Kong University of Science and Technology
Arpan Kar
Indian Institute of Technology
Jingguo Wang
University of Texas at Arlington
Juhee Kwon
City University of Hong Kong
Hong Xu
Hong Kong University of Science and Technology
Atanu Lahiri
University of Texas at Dallas
Heng Xu
University of Florida
Alvin Leung
City University of Hong Kong
Niam Yaraghi
University of Miami
Ting Li
Erasmus University
Cathy Liu Yang
HEC Paris
  Yingjie Zhang
Peking University

Acknowledgments

The senior editors of this special issue thank the editor-in-chief, Susan Brown, the senior editor, Balaji Padmanabhan, and an anonymous associate editor for detailed comments on the call-for-papers. 

References

Barrett, C., Boyd, B., Bursztein, E., Carlini, N., Chen, B., Choi, J., Chowdhury, A. R., Christodorescu, M., Datta, A., Feizi, S., Fisher, K., Hashimoto, T., Hendrycks, D., Jha, S., Kang, D., Kerschbaum, F., Mitchell, E., Mitchell, J., Ramzan, Z., Shams, K., Song, D., Taly, A., & Yang, D. (2023). Identifying and mitigating the security risks of generative AI, foundations and trends in privacy and security. Foundations and Trends® in Privacy and Security, 6(1), 1-52. 

Berente, N., Gu, B., Recker, J., & Santhanam, R. (2021). Managing artificial intelligence. MIS Quarterly, 45(3), 1433-1450.

Bhatti, M. B. (2024). Can companies utilize GenAI to improve data quality? LinkedIn. https://www.linkedin.com/pulse/can-companies-utilize-genai-improve-data-quality-muhammad-b-bhatti-cswne/ 

Brown, S. (2024). A community-based view of MIS Quarterly, MIS Quarterly, 48(1), pp. i-vi.

Cisco. (2024). Cisco 2024 data privacy benchmark study. https://www.cisco.com/c/en/us/about/trust-center/data-privacy-benchmark-study.html  

Chen, H., & Magramo, K. (2024). Finance worker pays out $25 million after video call with deepfake “chief financial officer.” CNN. https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html  

Congressional Research Service. (2023). Generative artificial intelligence and data privacy: A primer. https://crsreports.congress.gov/product/pdf/R/R47569 

Dennehy, D., Griva, A., Pouloudi, N., Dwivedi, Y.K, Mäntymäki, M., Pappas, I.O. (2023). Artificial intelligence (AI) and information systems: Perspectives to responsible AI. Information Systems Frontiers, 25, 1-7.

Dhar, J. (2023). People strategy in the age of generative AI. Boston Consulting Group. https://www.bcg.com/publications/2023/people-strategy-for-digital-age-of-ai 

Ene, C. (2023). 10.5 trillion reasons why we need a united response to cyber risk. Forbes. https://www.forbes.com/sites/forbestechcouncil/2023/02/22/105-trillion-reasons-why-we-need-a-united-response-to-cyber-risk/?sh=64a72fb33b0c 

European Parliament. (2018). General Data Protection Regulation. https://gdpr-info.eu/

Forrest, E. (2023). Latest research uncovers data quality as top concern for enterprises seeking to drive business agility through trusted data. Businesswire. https://www.businesswire.com/news/home/20230613161238/en/Latest-Research-Uncovers-Data-Quality-as-Top-Concern-for-Enterprises-Seeking-to-Drive-Business-Agility-through-Trusted-Data 

Ford, N., (2023). List of data breaches and cyber attacks in 2023. IT Governance. https://www.itgovernance.co.uk/blog/list-of-data-breaches-and-cyber-attacks-in-2023

Hoehle, H., Venkatesh, V., Brown S. A., Tepper, B., & Kude, T. (2022). Customer reactions to compensation as a data breach response: A mixed methods investigation. MIS Quarterly, 46(1), 299-340. 

Hui, K. L., Vance, A., & Zhdanov, D. (2016). Securing digital assets. MIS Quarterly Research Curations. https://www.misqresearchcurations.org/blog/2017/5/10/securing-digitalassets-1  

IBM Institute for Business Value. (2024). Generative AI amplifies risk and resilience. https://www.ibm.com/thought-leadership/institute-business-value/en-us/report/ceo-generative-ai/cybersecurity?page=18 

Lee, J. K., Liu, S., & Lin, Z. (2023). Workshop on AI/ChatGPT and Bright Internet. Presented at the International Conference on Electronic Commerce, Xi’an, China, June 29-July 2, 2023.

Mahmood, A. M., Siponen, M., Straub, D.W., Rao, H. R., & Raghu, T. S. (2010). Moving toward black hat research in information systems security: An editorial introduction to the special issue, MIS Quarterly, 34(3), 431-433. 

National Science Foundation. (2024). NSF announces new AI test beds initiative to advance safety and security of AI technologies. https://new.nsf.gov/news/nsf-announces-new-ai-test-beds-initiative-advance 

Robidoux, M. (2024). GenAI: A dual-edged sword for cybersecurity. Dell Technologies. https://www.dell.com/en-us/blog/genai-a-dual-edged-sword-for-cybersecurity/ 

Samtani, S., Yang, S., Chen, H. (2023). The 3rd Workshop on Artificial Intelligence-Enabled Cybersecurity Analytics. Presented at the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, August 6-10, 2023, Long Beach, CA, U.S.A. 

Sun, H., Zhu, T., Zhang, Z., Jin, D., Xiong, P., Zhou, W. (2023). Adversarial attacks against deep generative models on data: A survey. IEEE Transactions on Knowledge and Data Engineering, 35(4). 3367-3388. 

Timis, D. A. (2023). How to regulate AI without stifling innovation. World Economic Forum. https://www.weforum.org/agenda/2023/06/how-to-regulate-ai-without-stifling-innovation/  

Wach, K., Duong, C. D., Ejdys, J., Kazlauskaitė, R., Korzynski, P., Mazurek, G., Paliszkiewicz, J., & Ziemba, E. (2023). The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT. Entrepreneurial Business and Economics Review, 11(2), 7-30. 

White House. (2023a). Fact sheet: Biden-⁠Harris administration announces national cybersecurity strategy. https://www.whitehouse.gov/briefing-room/statements-releases/2023/03/02/fact-sheet-biden-harris-administration-announces-national-cybersecurity-strategy/    

White House. (2023b). Executive order on the safe, secure, and trustworthy development and use of artificial intelligence. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/  

World Economic Forum. (2023). The presidio recommendations on responsible generative AI (White paper). https://www.weforum.org/publications/the-presidio-recommendations-on-responsible-generative-ai/ 

 

Copyright © 2024 MISQ. All rights reserved.