Abstract
Large language models (LLMs) are an exciting breakthrough in the rapidly growing field of artificial intelligence (AI), offering unparalleled potential in application domains such as finance, business, healthcare, and cybersecurity. However, as these black-box models continue to advance, concerns about their trustworthiness and ethical implications have become increasingly prominent. This position paper examines the potential of LLMs from diverse perspectives along with the associated risk factors. We highlight not only the technical challenges but also the ethical implications and societal impacts of LLM deployment, emphasizing fairness, transparency, explainability, trust, and accountability. We conclude by summarizing promising research directions. Overall, the purpose of this position paper is to contribute to the ongoing discussion of LLM potential and awareness from the perspective of trustworthiness and responsibility in AI.
RAS ID
71519
Document Type
Journal Article
Date of Publication
12-1-2024
Volume
4
Issue
1
School
Centre for Securing Digital Futures
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Publisher
Springer
Recommended Citation
Sarker, I. H. (2024). LLM potentiality and awareness: A position paper from the perspective of trustworthy and responsible AI modeling. Discover Artificial Intelligence, 4(1), 40. https://doi.org/10.1007/s44163-024-00129-0