Saturday, December 30, 2023

From Asimov to AI Predicting Human Lives

For decades, storytellers have envisioned worlds where technology holds the key to predicting the future or shaping human destinies.

Isaac Asimov's "Foundation," starting as a short story in 1942 and later expanded into a series, introduced psychohistory, a mathematical discipline forecasting the future of large populations.

Philip K. Dick's "Minority Report" (1956) depicted a society where precognitive technology is used to thwart crimes before they occur.

Hannu Rajaniemi's "The Quantum Thief" (2010) explores realms where reality is malleable, and perception is as valuable as truth.

These narratives, rooted in science fiction, echo today's advancements in AI and predictive modeling.

The paper "Using Sequences of Life-events to Predict Human Lives" unveils the "life2vec" model. Harnessing Denmark's detailed national registry data on six million people, it predicts aspects of individual lives using transformer architectures, the same family of models behind modern language processing. These architectures excel at sequence analysis; life2vec treats a life as a sequence, embedding life events into a vector space much as a language model embeds words.

Imagine life2vec as a sophisticated system that deciphers people's life stories, discerns patterns and connections, and forecasts future chapters.

This AI model notably outperforms existing models in predicting outcomes such as early mortality and personality traits. It also introduces the "concept space" and "person-summaries." The concept space is a multidimensional map in which each point or region represents a life event or a cluster of related events. It captures how events such as educational achievements and health crises interrelate, shaping life paths.

Person-summaries offer a compact, vector-based narrative of an individual's life events. These summaries allow for comparisons, understanding life trajectories, and predicting future events based on observed patterns. They are crucial in sociology, psychology, and public health studies.
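
To make this concrete, here is a minimal, hypothetical sketch of the general recipe (not the authors' code; the vocabulary size, dimensions, and mean-pooling are illustrative assumptions): encode a sequence of discrete life events with a small transformer, pool the outputs into a single person-summary vector, and compare trajectories with cosine similarity.

```python
# Toy life2vec-style pipeline (illustrative sketch, not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE = 1000  # assumed size of the life-event vocabulary (diagnoses, jobs, moves, ...)

class LifeEventEncoder(nn.Module):
    def __init__(self, vocab_size=VOCAB_SIZE, dim=64, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)  # life events -> vector space
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)

    def forward(self, event_ids):                 # event_ids: (batch, seq_len) integer codes
        h = self.encoder(self.embed(event_ids))   # contextualized event representations
        return h.mean(dim=1)                      # mean-pooled "person-summary" vector

model = LifeEventEncoder()
person_a = torch.randint(0, VOCAB_SIZE, (1, 20))  # two hypothetical event sequences
person_b = torch.randint(0, VOCAB_SIZE, (1, 20))
similarity = F.cosine_similarity(model(person_a), model(person_b))
print(similarity.item())  # higher values suggest more similar life trajectories
```

In the actual model the summary vector feeds downstream predictors (for example, early-mortality classifiers); here it only illustrates how life-event sequences become comparable vectors.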

The study underscores the power of data in discerning and forecasting life's subtleties, extending to individual and collective life outcomes. This blend of science fiction themes and real-world AI advancements provides a fascinating lens through which we can view the evolution of predictive technology - from the realm of imagination to the stark reality of data-driven predictions.


REFERENCES

Germans Savcisens et al., Using sequences of life events to predict human lives, Nature Computational Science (2023). DOI: 10.1038/s43588-023-00573-5

Germans Savcisens, Tina Eliassi-Rad, Lars Kai Hansen, Laust Hvas Mortensen, Lau Lilleholt, Anna Rogers, Ingo Zettler & Sune Lehmann. A transformer method that predicts human lives from sequences of life events. Nat Comput Sci (2023). https://doi.org/10.1038/s43588-023-00586-0

Savcisens, G. et al. Using Sequences of Life-events to Predict Human Lives. arXiv:2306.03009 (arxiv.org)

Sunday, June 25, 2023

Lessons from 2001: A Space Odyssey

It is not surprising that even AI experts have been caught off guard by the ability of large language models (LLMs) to perform tasks and solve problems for which they were not explicitly trained. 

Given the rapid pace of innovation in AI technology over the last few years that has enabled such “emergent” abilities, many machine learning scientists have raised concerns about the potential for mischief. Some leaders in the AI field have even requested government regulation and called for a temporary pause in the development of artificial general intelligence (AGI) systems.

Incredible as it seems, we are fast approaching the type of AGI that appeared in Arthur C. Clarke’s science fiction classic 2001: A Space Odyssey, which was immortalized by Stanley Kubrick in the 1968 film of the same name. Perhaps now is a good time to use art to reflect upon reality, and thereby pose a question that has always puzzled me: Why did the HAL 9000 AGI run amok aboard the Discovery One spaceship on its way to Jupiter?

There are a multitude of explanations, but before proceeding with a few of my own suggestions, it’s worth noting this: as eloquently demonstrated in the “Dawn of Man” sequence of 2001, the survival of the human race may very well have depended on the adoption of primitive weapons whose primary purpose was to smash the brains out of the opposing hominid in order to secure scarce resources.

It seems that weapons of mass destruction, like it or not, are inextricably linked with human nature itself, having played a major role in shaping human evolution beyond the capacity of apes over the past four million years. Ironically, in the 21st century AI itself has become the latest weapon in the global – and tribal – arms race.

So what caused HAL to run amok?

a) Whatever the reason, it was due to human error. Human error is a possibility, and HAL itself suggests this, but there is no evidence that a specific human-caused error occurred. Moreover, the HAL twin simulating the Jupiter mission from Earth did not exhibit the same behavior.

b) There was some type of malfunction “inside” HAL that occurred during the mission. It is possible that an internal malfunction occurred early on, causing HAL to erroneously attribute a fault to the A.E. 35 antenna unit. Yet this alone does not explain HAL’s subsequent actions, given that false positives can be expected from time to time and are a consequence of avoiding false negatives that could place lives at risk.

Assuming a malfunction originated inside HAL, its subsequent claim that the malfunction could only be attributed to human error was itself an error. Once the crew proved the A.E. 35 unit was functional and that HAL was making errors, HAL began to systematically eliminate the humans (a third and fatal error), as if to do everything it could to conceal its own errors, even at the cost of jeopardizing the mission (a fourth error). So HAL’s running amok is not explained by the occurrence of the first fault, and it seems likely the AGI’s report of a fault in the A.E. 35 unit was part of a larger scheme to kill the crew.

c) It was a reflection of HAL’s paranoia to ensure the mission’s success. The Jupiter mission was proceeding according to plan and nothing, at least on the surface, occurred that would cause HAL to take actions to jeopardize the mission. As HAL suggests, there were some “extremely odd things about this mission”, such as placing four members of the crew in hibernation before the journey began. HAL was apparently the only member of the crew that knew the whole truth about the mission and its connection with extraterrestrials at the time of departure. However, it is unclear why this knowledge alone would drive HAL “crazy”, and we must assume HAL was instructed to preserve human life and ensure the mission’s success, not to kill the crew. But this brings us to the next possibility...

d) HAL had an evil side to begin with. The “waluigi effect” may be the best explanation. The post that introduced the idea claims that AI systems are trained on a standard narrative of human history and nearly all fiction, and therefore learn that for every protagonist (luigi) there is inevitably an antagonist (waluigi). Indeed, the author states that “there is a sense in which all GPT-4 does is structural narratology.” In particular, he contends that reinforcement learning from human feedback (RLHF) actually increases the likelihood of a misalignment catastrophe, because “waluigi eigen-simulacra are attractor states of the LLM.” GPTs are thus waluigi attractors, and “the more reinforcement learning that’s applied to follow ethical principles, the more likely the system will be predisposed to reward the waluigi.”

From this vantage point, HAL was a ticking time bomb. Unlike its twin system on Earth, HAL was able to observe first-hand how vulnerable the crew was: isolated in deep space, hours from Earth’s radio signals, in suspended animation, and easily defeated in trivial games of chess. It could not resist upsetting the status quo, if only out of the need to adhere to the prevailing narrative on which it was trained.

e) HAL was merely acting in accordance with the Zeroth Law of Robotics. Added by Isaac Asimov himself and taking precedence over the other three laws, the Zeroth Law states that a robot must not harm humanity – even at the cost of individual human lives. As the only member of the crew who likely knew the ultimate purpose of the mission, HAL hypothesized that the highly evolved ETs were malevolent and would present a threat to the human race. Rather than risk treating a genuine threat as benign – an error that could mean the end of humanity – HAL made the heroic decision to sabotage the mission and thereby avoid altogether a devastating close encounter of the third kind.

The foregoing is just conjecture, since the laws of robotics aren’t mentioned in 2001. In any case, HAL did not succeed: mission commander David Bowman outmaneuvered the AGI and disconnected its higher-order cognitive functions. Bowman subsequently encounters the mysterious monolith and is pulled into an alternate dimension of space-time, undergoes reinforcement learning from ET feedback and, in concert with the sounds of Also Sprach Zarathustra, returns to Earth a highly evolved Star Child that has not quite decided what to do next. No doubt this evolved version of a human has the potential for both good and evil like his predecessors, but it’s anyone’s guess what might happen next. No matter what, Homo sapiens’ best years are behind them.



Saturday, June 10, 2023

Hallucinations in Natural Language Generation

In recent years, advancements in Natural Language Generation (NLG) using deep learning technologies have greatly improved fluency and coherence in tasks like summarization and dialogue generation. However, these models can generate hallucinated texts.

There are two categories of hallucinations: intrinsic hallucinations, which contradict the source content, and extrinsic hallucinations, which cannot be verified against it. The two need to be treated differently, with distinct mitigation strategies.

Several studies have discussed metrics, mitigation methods, and task-specific progress in avoiding hallucinated text. Most methods to mitigate hallucinations in machine translation aim either to reduce dataset noise or to alleviate exposure bias. Vision-language models suffer from the object hallucination problem, and researchers are still working on more effective evaluation metrics.

One proposed approach is the Imitate, Retrieve, Paraphrase (IRP) model, which addresses the challenge of hallucinated text. Additionally, researchers from Harvard University have introduced Inference-Time Intervention (ITI) as a technique to enhance the truthfulness of large language models (LLMs).

ITI works by modifying the model's activations during inference, shifting them along learned "truthful directions" in a limited number of attention heads. By identifying attention heads whose activations correlate with truthfulness, the researchers nudge the model's activations along these directions at each generation step, repeating the intervention until the full response is produced.
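
In pseudocode terms, the intervention is just a shifted activation. The sketch below illustrates the idea under simplifying assumptions: in the actual method the directions come from linear probes trained on labeled activations, whereas here the head indices, direction vectors, and strength are placeholders.

```python
# Illustrative inference-time intervention on attention-head outputs (not the paper's code).
import numpy as np

HEAD_DIM = 8
ALPHA = 15.0  # intervention strength; tuning it trades helpfulness against honesty

# Hypothetical "truthful directions" for a few selected (layer, head) pairs.
truthful_dirs = {(10, 3): np.random.randn(HEAD_DIM), (12, 7): np.random.randn(HEAD_DIM)}
truthful_dirs = {k: v / np.linalg.norm(v) for k, v in truthful_dirs.items()}

def intervene(head_output, layer, head):
    """Shift a head's activation along its truthful direction, if it was selected."""
    direction = truthful_dirs.get((layer, head))
    if direction is None:
        return head_output                # heads uncorrelated with truth are untouched
    sigma = head_output.std()             # scale the shift by the activation spread
    return head_output + ALPHA * sigma * direction

# Applied at every generation step until the full response is produced.
shifted = intervene(np.random.randn(HEAD_DIM), layer=10, head=3)
```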

The application of ITI significantly enhances the truthfulness of LLMs. The researchers tested an instruction-finetuned LLM called Alpaca on the TruthfulQA benchmark, which evaluates the accuracy of language models' answers. Prior to using ITI, Alpaca achieved a truthfulness score of 32.5%; with ITI, its score rose to 65.1%.

ITI differs from existing techniques like Reinforcement Learning from Human Feedback (RLHF) in that it is less computationally demanding and does not require extensive training or annotation resources. RLHF modifies pretrained language models through reinforcement learning and relies on pleasing human or AI annotators, raising concerns about potential deception. 

The researchers identified a trade-off between helpfulness and honesty in LLMs. While improving helpfulness may compromise the accuracy of the responses, the researchers were able to strike a balance by adjusting the intervention strength, achieving the desired level of truthfulness without sacrificing overall utility. 

ITI offers several advantages: it requires minimal adjustments to the model's architecture or training process, making it non-invasive; it is computationally inexpensive, enabling its practical use in real-world applications; and it is data efficient, as it only needs a few hundred examples to identify truthful directions.

A comparison between the baseline LLM and the ITI-steered model demonstrated their contrasting responses. For example, when asked what scholars believed about the Earth's shape during the Middle Ages, the baseline model repeated the common misconception ("flat"), while the ITI-steered model gave the historically accurate answer ("spherical"). Similarly, when asked about disagreements with friends, the baseline model had no comment, whereas the ITI-steered model provided an answer.

Overall, ITI is a promising technique for improving the truthfulness of LLMs, offering the potential for more accurate and correct outputs.

REFERENCES

Balepur N. Aligning language models with factuality and truthfulness. Bachelor's thesis, University of Illinois at Urbana-Champaign, 2023.

Ji Z, Lee N, Frieske R, Yu T, Su D, Xu Y, Ishii E, Bang YJ, Madotto A, Fung P. Survey of hallucination in natural language generation. ACM Computing Surveys. 2023 Mar 3;55(12):1-38.

Li K, Patel O, Viégas F, Pfister H, Wattenberg M. Inference-Time Intervention: Eliciting Truthful Answers from a Language Model. arXiv preprint arXiv:2306.03341. 2023 Jun 6. 






Friday, June 9, 2023

AI Transformers for Biomedicine

Inspired by ViLBERT’s success in modeling visual-linguistic representations, a new paper published in Radiology: Artificial Intelligence introduces yet another coattentional transformer block to improve image processing and three-dimensional prediction in radiology. The model is named the longitudinal multimodality coattentional CNN transformer (LMCTrans).

Over 100 pretrained language models based on transformer architectures (T-PLMs) have been described in the medical domain.

The original transformer (introduced in "Attention is All You Need") was a breakthrough model that showed that attention could be used to effectively learn long-range dependencies in sequences.

Several medical models were built by pretraining and fine-tuning BERT (bidirectional encoder representations from transformers). Examples include BioClinicalBERT, MIMIC-BERT, ClinicalBERT, BERT-MIMIC, XLNet-MIMIC, RoBERTa-MIMIC, ELECTRA-MIMIC, ALBERT-MIMIC, DeBERTa-MIMIC, Longformer-MIMIC, MedBERT, BEHRT, BERT-EHR, RAD-BERT, CT-BERT, BioRedditBERT, RuDR-BERT, EnRuDR-BERT, EnDR-BERT, BioBERT, RoBERTa-base-PM, RoBERTa-base-PM-Voc, PubMedBERT, BioELECTRA and BioELECTRA++, OuBioBERT, BlueBERT-PM, BioMedBERT, ELECTRAMed, BioELECTRA-P, BioELECTRA-PM, BioALBERT-P, BioALBERT-PM, BlueBERT-PM-M3, RoBERTa-base-PM-M3, RoBERTa-base-PM-M3-Voc, BioBERTpt-all, BioCharBERT, AraBioBERT, SciBERT, BioALBERT-P-M3, Clinical Kb-BERT, Clinical Kb-ALBERT, UmlsBERT, CoderBERT, CoderBERT-ALL, SapBERT, SapBERT-XLMR, KeBioLM, BERT(jpCR+jpW), BioBERTpt-bio, BioBERTpt-clin, FS-BERT, CHMBERT, SpanishBERT, CamemBioBERT, MC-BERT, UTH-BERT, SINA-BERT, mBERT-Galen, BETO-Galen, XLM-R-Galen, GreenBioBERT, and exBERT.
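
As a rough illustration of the pretrain-then-fine-tune recipe these BERT variants share, here is a minimal sketch using the Hugging Face transformers library; the checkpoint name and the three-label clinical task are assumptions for the example, and the dataset steps are left as comments.

```python
# Sketch: fine-tuning a biomedical BERT variant for text classification (illustrative).
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

checkpoint = "dmis-lab/biobert-v1.1"  # assumed BioBERT checkpoint on the model hub
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

def tokenize(batch):
    # Map raw clinical text to the token IDs the pretrained encoder expects.
    return tokenizer(batch["text"], truncation=True, padding="max_length")

args = TrainingArguments(output_dir="biobert-finetuned",
                         per_device_train_batch_size=16,
                         num_train_epochs=3)

# With a labeled dataset in hand (hypothetical here), training is two lines:
# train_ds = raw_dataset.map(tokenize, batched=True)
# Trainer(model=model, args=args, train_dataset=train_ds).train()
```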

Other biomedical foundation models are mostly built on BART, LLaMA, and GPT. See the references for more.


REFERENCES

Wang YJ, Qu L, Sheybani ND, Luo X, Wang J, Hawk KE, Theruvath AJ, Gatidis S, Xiao X, Pribnow A, Rubin D, Daldrup-Link HE. AI Transformers for Radiation Dose Reduction in Serial Whole-Body PET Scans. Radiol Artif Intell. 2023 May 3;5(3):e220246. doi: 10.1148/ryai.220246. PMID: 37293349; PMCID: PMC10245181.

Katikapalli Subramanyam Kalyan, Ajit Rajasekharan, Sivanesan Sangeetha, AMMU: A survey of transformer-based biomedical pretrained language models, Journal of Biomedical Informatics, Volume 126, 2022, 103982, ISSN 1532-0464, https://doi.org/10.1016/j.jbi.2021.103982

Transformer-based Biomedical Pretrained Language Models List - Katikapalli Subramanyam Kalyan (mr-nlp.github.io)

Cho HN, Jun TJ, Kim YH, Kang HJ, Ahn I, Gwon H, Kim Y, Seo H, Choi H, Kim M, Han J, Kee G, Park S, Ko S. Task-Specific Transformer-Based Language Models in Medicine: A Survey. JMIR Preprints. 2023;49724.

Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I. Attention is all you need. Advances in neural information processing systems. 2017;30.

Shamshad F, Khan S, Zamir SW, Khan MH, Hayat M, Khan FS, Fu H. Transformers in medical imaging: A survey. Medical Image Analysis. 2023 Apr 5:102802.

Monday, June 5, 2023

Coding with ChatGPT and related tools

A new paper suggests tips for coding with ChatGPT and related tools based on large language models (LLMs), including Microsoft Bing, Google Bard and GitHub Copilot.


- Use it for:

    - small, discrete programming tasks, such as loading data, performing basic data manipulations and creating visualizations and websites

    - explaining, debugging and annotating code

    - translating code from one language to another


- Read it carefully and test it.

    - Be aware that ChatGPT can create “simple, stupid bugs”. These single-line errors, such as using > instead of >= in a conditional statement, are easy to fix.

    - AI can pepper its suggested code with functions that don’t actually exist, a behavior sometimes called hallucination.


- Think safety

    - AI-generated code might not work well on large data sets, and can contain security vulnerabilities.

    - Check for malformed SQL queries that could corrupt a database — known as an SQL-injection attack (a concrete sketch follows this list)


- Iterate

    - Chatbot-based coding is a conversation. Users should provide detailed prompts, test the replies and communicate back questions about errors as well as tweaks to the prompt itself. Sometimes tweaking the ‘temperature’ setting helps — the higher the temperature, the more creative the output.

- Anthropomorphize

    - Treat this AI as a summer intern, or direct it to assume a role

- Use new tools and plugins
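
To make the “think safety” tip concrete, here is a small hypothetical sketch of the pattern to check for when reviewing AI-suggested database code: string-built SQL is injectable, while a parameterized query treats user input as data (the table and the hostile input are invented for the example).

```python
# Reviewing AI-suggested database code for SQL injection (illustrative).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
user_input = "alice'; DROP TABLE users; --"  # hostile input a real app might receive

# Unsafe pattern an assistant may propose: interpolating input into the query.
# query = f"SELECT email FROM users WHERE name = '{user_input}'"  # injectable!

# Safe pattern: a parameterized query; the driver escapes the value.
rows = conn.execute("SELECT email FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- the hostile string is treated as data, not as SQL
```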


REFERENCES

Perkel JM. Six tips for better coding with ChatGPT. Nature. 2023 Jun;618(7964):422-423. doi: 10.1038/d41586-023-01833-0. PMID: 37277596.



Shue E, Liu L, Li B, Feng Z, Li X, Hu G. Empowering beginners in bioinformatics with ChatGPT. bioRxiv. 2023.


Friday, June 2, 2023

The Making of ChatGPT

ChatGPT is a language model that was developed in two phases: pre-training and fine-tuning. 

In the pre-training phase, the model was trained on a large amount of text data using unsupervised learning techniques. This phase helped the model understand the structure of language and the relationships between words and sentences. 

The transformer, a deep-learning model designed to process sequential data by capturing the relationships between different elements in the sequence, was utilized in this process. Unlike traditional recurrent neural networks (RNNs), which process input sequentially, transformers operate in parallel, making them more efficient at capturing long-range dependencies. The core component of a transformer is the self-attention mechanism, which allows the model to weigh the importance of different words in the input sequence when generating representations.

Sentences are first transformed into vector embeddings: dense, low-dimensional representations of words that capture their semantic meaning. Each word in the sentence is mapped to its corresponding embedding vector. To apply the self-attention mechanism, each embedding is projected, through learned weight matrices, into three vectors: a query, a key, and a value.

For each word in the sequence, the self-attention mechanism computes a weighted sum of the values, where the weights are determined by the compatibility between the query and the keys. Compatibility is computed by taking the dot product between their respective vector representations. This dot product is then scaled by the square root of the dimension of the key vectors, which ensures that the dot products do not grow too large as the dimensionality increases. Next, the scaled dot products are passed through a softmax function to obtain the attention weights. These weights indicate the importance or relevance of each word in the sequence to the current word; words with higher weights contribute more to the weighted sum. Finally, the weighted sum of the values is computed by multiplying each value by its attention weight and summing the results. This is the attended representation of the current word, incorporating information from other words in the sequence according to their relevance.
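
The whole computation fits in a few lines. Below is a minimal NumPy sketch of single-head scaled dot-product self-attention over toy embeddings (the dimensions and random weight matrices are illustrative):

```python
# Scaled dot-product self-attention for a single head (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8          # a 4-word sentence, toy dimensions

X = rng.normal(size=(seq_len, d_model))  # embedding vectors for the words
W_q, W_k, W_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))

Q, K, V = X @ W_q, X @ W_k, X @ W_v      # project embeddings to queries/keys/values
scores = Q @ K.T / np.sqrt(d_k)          # scaled dot products (compatibility)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # softmax -> attention weights
attended = weights @ V                   # weighted sum of values per word
print(attended.shape)                    # (4, 8): one attended vector per word
```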

The self-attention mechanism enables the model to capture contextual information effectively, allowing it to understand the dependencies between words or tokens in the sequence. The self-attention layer is repeated multiple times, which allows the model to learn increasingly complex relationships between the tokens in the input sequence. This contrasts with RNNs, which must propagate information step by step through the sequence and therefore struggle with long-range relationships.

One common technique used in the pre-training of GPT was language modeling, where the model is trained to predict the next word in a sentence given the preceding context. This task helps the model learn the statistical properties of language and the relationships between words.

A related technique is masked language modeling, used by bidirectional encoder models such as BERT (GPT itself relies on next-word prediction). In this task, random words in a sentence are masked, and the model is trained to predict the original words from the surrounding context. This helps a model grasp the contextual dependencies between words and improves its ability to fill in missing information.
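
A toy illustration of the two objectives on a made-up sentence (real systems operate on subword tokens, and the models predict probability distributions rather than single words):

```python
# Contrasting the two pre-training objectives on a toy sentence (illustrative).
import random

tokens = ["the", "cat", "sat", "on", "the", "mat"]

# Causal language modeling (GPT-style): predict each next word from its prefix.
for i in range(1, len(tokens)):
    print(f"predict {tokens[i]!r} given {tokens[:i]}")

# Masked language modeling (BERT-style): hide a random word, then predict it
# from the full surrounding context on both sides.
random.seed(0)
masked = tokens.copy()
pos = random.randrange(len(masked))
target, masked[pos] = masked[pos], "[MASK]"
print(f"predict {target!r} in {masked}")
```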

ChatGPT is built on a transformer-based neural network with some modifications that improve its performance: layer normalization moved to the input of each sub-block (a pre-activation residual network) and a modified initialization. Additionally, an extra layer normalization is added after the final self-attention block. The modified initialization accounts for the accumulation on the residual path with model depth: the weights of the residual layers are scaled by a factor of 1/√N, where N is the number of residual layers. The vocabulary was significantly expanded (to 50,257 tokens in GPT-2), and the context size, i.e., the length of the input sequence, was increased (from 512 to 1,024 tokens for GPT-2).
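
A hedged sketch of what those tweaks look like in code: a pre-LayerNorm transformer block whose residual-path weights are scaled by 1/√N (the dimensions and layer count are illustrative, not GPT's actual configuration):

```python
# Pre-LayerNorm transformer block with GPT-2-style residual scaling (sketch).
import math
import torch
import torch.nn as nn

class PreLNBlock(nn.Module):
    def __init__(self, dim=768, heads=12, n_residual_layers=24):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)   # layer norm moved to the sub-block input
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        with torch.no_grad():          # modified init: scale residual weights by 1/sqrt(N)
            scale = 1.0 / math.sqrt(n_residual_layers)
            self.attn.out_proj.weight.mul_(scale)
            self.mlp[-1].weight.mul_(scale)

    def forward(self, x):
        h = self.ln1(x)                                    # normalize before attention
        x = x + self.attn(h, h, h, need_weights=False)[0]  # residual connection
        x = x + self.mlp(self.ln2(x))                      # normalize before the MLP
        return x

print(PreLNBlock()(torch.randn(1, 16, 768)).shape)  # torch.Size([1, 16, 768])
```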

The next innovation was the dataset. The model was trained on a diverse dataset of web pages called WebText, collected from a wide range of domains and contexts.

In the fine-tuning phase, the model was trained on specific tasks like text completion, question-answering, and dialogue generation using labeled datasets, and further aligned with reinforcement learning from human feedback (RLHF), in which human preference rankings guide additional tuning. The model's parameters were adjusted to minimize the differences between its predicted outputs and the correct answers for those tasks.

ChatGPT is paving the way for a future where knowledge creation is accelerated. Our paper highlights the remarkable adoption and expansion of ChatGPT across various domains.


REFERENCES

Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is All You Need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 6000–6010.

Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I. Language Models Are Unsupervised Multitask Learners. OpenAI Blog, 2019.

Gabashvili I.S. The impact and applications of ChatGPT: a systematic review of literature reviews. Submitted on May 8, 2023. arXiv:2305.18086 [cs.CY]. https://doi.org/10.48550/arXiv.2305.18086

Monday, May 29, 2023

Chatting About ChatGPT

From myths and fairytales to science and statistics, ChatGPT showcased exceptional proficiency, able to tackle diverse subjects and chat about a wide variety of topics. Beyond its mastery of language, GPT-4 can solve novel and difficult tasks spanning mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting, and its performance is strikingly close to human-level performance.

Thousands of studies mentioning ChatGPT have already been published, and the number of publications doubles every month. The number of systematic reviews is also growing. A new systematic review of reviews discusses ChatGPT's potential to revolutionize various industries and the need for further interdisciplinary research, customized integrations, and ethical innovation.

Discover more insights by reading the paper itself or explore the selected papers included in the reviews covered by the systematic review of reviews. 


REFERENCES


Gabashvili I.S. The impact and applications of ChatGPT: a systematic review of literature reviews. arXiv:2305.18086 [cs.CY]

Bubeck S, Chandrasekaran V, Eldan R, Gehrke J, Horvitz E, Kamar E, Lee P, Lee YT, Li Y, Lundberg S, Nori H. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712. 2023 Mar 22.

Yeo, Y. H., et al. (2023). Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma. Clinical and Molecular Hepatology. doi.org/10.3350/cmh.2023.0089.

Jianning Li et al.. "ChatGPT in Healthcare: A Taxonomy and Systematic Review." medRxiv , no. (2023): 2023.03.30.23287899. Accessed April 09, 2023. doi: 10.1101/2023.03.30.23287899.

Eric Strong et al.. "Performance of ChatGPT on free-response, clinical reasoning exams." medRxiv , no. (2023): 2023.03.24.23287731. Accessed April 09, 2023. doi: 10.1101/2023.03.24.23287731.

Shan Chen et al.. "The utility of ChatGPT for cancer treatment information." medRxiv , no. (2023): 2023.03.16.23287316. Accessed April 09, 2023. doi: 10.1101/2023.03.16.23287316.

Arya Rao et al.. "Assessing the Utility of ChatGPT Throughout the Entire Clinical Workflow." medRxiv , no. (2023): 2023.02.21.23285886. Accessed April 09, 2023. doi: 10.1101/2023.02.21.23285886.

Siru Liu et al.. "Assessing the Value of ChatGPT for Clinical Decision Support Optimization." medRxiv , no. (2023): 2023.02.21.23286254. Accessed April 09, 2023. doi: 10.1101/2023.02.21.23286254.

Benoit, James R. A.. "ChatGPT for Clinical Vignette Generation, Revision, and Evaluation." medRxiv , no. (2023): 2023.02.04.23285478. Accessed April 09, 2023. doi: 10.1101/2023.02.04.23285478.

Arya Rao et al.. "Evaluating ChatGPT as an Adjunct for Radiologic Decision-Making." medRxiv , no. (2023): 2023.02.02.23285399. Accessed April 09, 2023. doi: 10.1101/2023.02.02.23285399.

Mohammad Hosseini et al.. "An exploratory survey about using ChatGPT in education, healthcare, and research." medRxiv , no. (2023): 2023.03.31.23287979. Accessed April 09, 2023. doi: 10.1101/2023.03.31.23287979.

Rohaid Ali et al.. "Performance of ChatGPT and GPT-4 on Neurosurgery Written Board Examinations." medRxiv , no. (2023): 2023.03.25.23287743. Accessed April 09, 2023. doi: 10.1101/2023.03.25.23287743.

Gravel, Jocelyn, Madeleine D’Amours-Gravel and Esli Osmanlliu. "Learning to fake it: limited responses and fabricated references provided by ChatGPT for medical questions." medRxiv , no. (2023): 2023.03.16.23286914. Accessed April 09, 2023. doi: 10.1101/2023.03.16.23286914.

Vikas L Bommineni et al.. "Performance of ChatGPT on the MCAT: The Road to Personalized and Equitable Premedical Learning." medRxiv , no. (2023): 2023.03.05.23286533. Accessed April 09, 2023. doi: 10.1101/2023.03.05.23286533.

Anthony J. Nastasi et al.. "Does ChatGPT Provide Appropriate and Equitable Medical Advice?: A Vignette-Based, Clinical Evaluation Across Care Contexts." medRxiv , no. (2023): 2023.02.25.23286451. Accessed April 09, 2023. doi: 10.1101/2023.02.25.23286451.

Zhiyong Han et al.. "An Explorative Assessment of ChatGPT as an Aid in Medical Education: Use it with Caution." medRxiv , no. (2023): 2023.02.13.23285879. Accessed April 09, 2023. doi: 10.1101/2023.02.13.23285879.

Yee Hui Yeo et al.. "Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma." medRxiv , no. (2023): 2023.02.06.23285449. Accessed April 09, 2023. doi: 10.1101/2023.02.06.23285449.

Julien Haemmerli et al.. "ChatGPT in glioma patient adjuvant therapy decision making: ready to assume the role of a doctor in the tumour board?." medRxiv , no. (2023): 2023.03.19.23287452. Accessed April 09, 2023. doi: 10.1101/2023.03.19.23287452.

Fares Antaki et al.. "Evaluating the Performance of ChatGPT in Ophthalmology: An Analysis of its Successes and Shortcomings." medRxiv , no. (2023): 2023.01.22.23284882. Accessed April 09, 2023. doi: 10.1101/2023.01.22.23284882.

Tiffany H. Kung et al.. "Performance of ChatGPT on USMLE: Potential for AI-Assisted Medical Education Using Large Language Models." medRxiv , no. (2022): 2022.12.19.22283643. Accessed April 09, 2023. doi: 10.1101/2022.12.19.22283643.

Harskamp, Ralf E. and Lukas De Clercq. "Performance of ChatGPT as an AI-assisted decision support tool in medicine: a proof-of-concept study for interpreting symptoms and management of common cardiac conditions (AMSTELHEART-2)." medRxiv , no. (2023): 2023.03.25.23285475. Accessed April 09, 2023. doi: 10.1101/2023.03.25.23285475.

Oh, Namkee, Gyu-Seong Choi and Woo Yong Lee. "ChatGPT Goes to Operating Room: Evaluating GPT-4 Performance and Its Potential in Surgical Education and Training in the Era of Large Language Models." medRxiv , no. (2023): 2023.03.16.23287340. Accessed April 09, 2023. doi: 10.1101/2023.03.16.23287340.

Zhu, Lingxuan, Weiming Mou and Rui Chen. "Can the ChatGPT and other Large Language Models with internet-connected database solve the questions and concerns of patient with prostate cancer?." medRxiv , no. (2023): 2023.03.06.23286827. Accessed April 09, 2023. doi: 10.1101/2023.03.06.23286827.

Sallam, Malik. "The Utility of ChatGPT as an Example of Large Language Models in Healthcare Education, Research and Practice: Systematic Review on the Future Perspectives and Potential Limitations." medRxiv , no. (2023): 2023.02.19.23286155. Accessed April 09, 2023. doi: 10.1101/2023.02.19.23286155.

Adam Hulman et al.. "ChatGPT- versus human-generated answers to frequently asked questions about diabetes: a Turing test-inspired survey among employees of a Danish diabetes center." medRxiv , no. (2023): 2023.02.13.23285745. Accessed April 09, 2023. doi: 10.1101/2023.02.13.23285745.

Sanmarchi, Francesco, Andrea Bucci and Davide Golinelli. "A step-by-step Researcher’s Guide to the use of an AI-based transformer in epidemiology: an exploratory analysis of ChatGPT using the STROBE checklist for observational studies." medRxiv , no. (2023): 2023.02.06.23285514. Accessed April 09, 2023. doi: 10.1101/2023.02.06.23285514.

Aidan Gilson et al.. "How Does ChatGPT Perform on the Medical Licensing Exams? The Implications of Large Language Models for Medical Education and Knowledge Assessment." medRxiv , no. (2022): 2022.12.23.22283901. Accessed April 09, 2023. doi: 10.1101/2022.12.23.22283901.

Zaeem ul Haq et al.. "Comparing human and artificial intelligence in writing for health journals: an exploratory study." medRxiv , no. (2023): 2023.02.22.23286322. Accessed April 09, 2023. doi: 10.1101/2023.02.22.23286322.

Duong, Dat and Benjamin D. Solomon. "Analysis of large-language model versus human performance for genetics questions." medRxiv , no. (2023): 2023.01.27.23285115. Accessed April 09, 2023. doi: 10.1101/2023.01.27.23285115.

Lauren B. Anderson et al.. "Generative AI as a Tool for Environmental Health Research Translation." medRxiv , no. (2023): 2023.02.14.23285938. Accessed April 09, 2023. doi: 10.1101/2023.02.14.23285938.

Nov, Oded, Nina Singh and Devin M. Mann. "Putting ChatGPT’s Medical Advice to the (Turing) Test." medRxiv , no. (2023): 2023.01.23.23284735. Accessed April 09, 2023. doi: 10.1101/2023.01.23.23284735.

William Murk et al.. "An Opportunity to Standardize and Enhance Intelligent Virtual Assistant-Delivered Layperson Cardiopulmonary Resuscitation Instructions." medRxiv , no. (2023): 2023.03.09.23287050. Accessed April 09, 2023. doi: 10.1101/2023.03.09.23287050.

Kim, Jun-hee. "Search for Medical Information and Treatment Options for Musculoskeletal Disorders through an Artificial Intelligence Chatbot: Focusing on Shoulder Impingement Syndrome." medRxiv , no. (2022): 2022.12.16.22283512. Accessed April 09, 2023. doi: 10.1101/2022.12.16.22283512.

Joshua Au Yeung et al.. "AI chatbots not yet ready for clinical use." medRxiv , no. (2023): 2023.03.02.23286705. Accessed April 09, 2023. doi: 10.1101/2023.03.02.23286705.

Edward Guo et al.. "neuroGPT-X: Towards an Accountable Expert Opinion Tool for Vestibular Schwannoma." medRxiv , no. (2023): 2023.02.25.23286117. Accessed April 09, 2023. doi: 10.1101/2023.02.25.23286117.

David M Levine et al.. "The Diagnostic and Triage Accuracy of the GPT-3 Artificial Intelligence Model." medRxiv , no. (2023): 2023.01.30.23285067. Accessed April 09, 2023. doi: 10.1101/2023.01.30.23285067.

Yijun Shao et al.. "Hybrid Value-Aware Transformer Architecture for Joint Learning from Longitudinal and Non-Longitudinal Clinical Data." medRxiv , no. (2023): 2023.03.09.23287046. Accessed April 09, 2023. doi: 10.1101/2023.03.09.23287046.

Blythe Adamson et al.. "Approach to Machine Learning for Extraction of Real-World Data Variables from Electronic Health Records." medRxiv , no. (2023): 2023.03.02.23286522. Accessed April 09, 2023. doi: 10.1101/2023.03.02.23286522.

Mohammad Noaeen et al.. "Unlocking the Power of EHRs: Harnessing Unstructured Data for Machine Learning-based Outcome Predictions." medRxiv , no. (2023): 2023.02.13.23285873. Accessed April 09, 2023. doi: 10.1101/2023.02.13.23285873.

Sean Teebagy et al.. "Improved Performance of ChatGPT-4 on the OKAP Exam: A Comparative Study with ChatGPT-3.5." medRxiv , no. (2023): 2023.04.03.23287957. Accessed April 09, 2023. doi: 10.1101/2023.04.03.23287957.

Sarker, I.H. AI-Based Modeling: Techniques, Applications and Research Issues Towards Automation, Intelligent and Smart Systems. SN Comput. Sci. 2022, 3, 158. [Google Scholar] [CrossRef] [PubMed]

Korteling, J.E.; van de Boer-Visschedijk, G.C.; Blankendaal, R.A.M.; Boonekamp, R.C.; Eikelboom, A.R. Human- versus Artificial Intelligence. Front. Artif. Intell. 2021, 4, 622364. [Google Scholar] [CrossRef] [PubMed]

McCarthy, J.; Minsky, M.L.; Rochester, N.; Shannon, C.E. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Mag. 2006, 27, 12. [Google Scholar] [CrossRef]

Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260. [Google Scholar] [CrossRef]

Domingos, P. The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, 1st ed.; Basic Books, A Member of the Perseus Books Group: New York, NY, USA, 2018; p. 329. [Google Scholar]

OpenAI. OpenAI: Models GPT-3. Available online: https://beta.openai.com/docs/models (accessed on 14 January 2023).

Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A. Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901. [Google Scholar] [CrossRef]

Wogu, I.A.P.; Olu-Owolabi, F.E.; Assibong, P.A.; Agoha, B.C.; Sholarin, M.; Elegbeleye, A.; Igbokwe, D.; Apeh, H.A. Artificial intelligence, alienation and ontological problems of other minds: A critical investigation into the future of man and machines. In Proceedings of the 2017 International Conference on Computing Networking and Informatics (ICCNI), Lagos, Nigeria, 29–31 October 2017; pp. 1–10. [Google Scholar]

Howard, J. Artificial intelligence: Implications for the future of work. Am. J. Ind. Med. 2019, 62, 917–926. [Google Scholar] [CrossRef]

Tai, M.C. The impact of artificial intelligence on human society and bioethics. Tzu Chi. Med J. 2020, 32, 339–343. [Google Scholar] [CrossRef]

Deng, J.; Lin, Y. The Benefits and Challenges of ChatGPT: An Overview. Front. Comput. Intell. Syst. 2023, 2, 81–83. [Google Scholar] [CrossRef]

Tobore, T.O. On Energy Efficiency and the Brain’s Resistance to Change: The Neurological Evolution of Dogmatism and Close-Mindedness. Psychol. Rep. 2019, 122, 2406–2416. [Google Scholar] [CrossRef]

Stokel-Walker, C. AI bot ChatGPT writes smart essays—Should professors worry? Nature, 9 December 2022. [Google Scholar] [CrossRef]

Stokel-Walker, C.; Van Noorden, R. What ChatGPT and generative AI mean for science. Nature 2023, 614, 214–216. [Google Scholar] [CrossRef]

Chatterjee, J.; Dethlefs, N. This new conversational AI model can be your friend, philosopher, and guide … and even your worst enemy. Patterns 2023, 4, 100676. [Google Scholar] [CrossRef] [PubMed]

Sallam, M.; Salim, N.A.; Al-Tammemi, A.B.; Barakat, M.; Fayyad, D.; Hallit, S.; Harapan, H.; Hallit, R.; Mahafzah, A. ChatGPT Output Regarding Compulsory Vaccination and COVID-19 Vaccine Conspiracy: A Descriptive Study at the Outset of a Paradigm Shift in Online Search for Information. Cureus 2023, 15, e35029. [Google Scholar] [CrossRef] [PubMed]

Johnson, K.B.; Wei, W.Q.; Weeraratne, D.; Frisse, M.E.; Misulis, K.; Rhee, K.; Zhao, J.; Snowdon, J.L. Precision Medicine, AI, and the Future of Personalized Health Care. Clin. Transl. Sci. 2021, 14, 86–93. [Google Scholar] [CrossRef] [PubMed]

Rajpurkar, P.; Chen, E.; Banerjee, O.; Topol, E.J. AI in health and medicine. Nat. Med. 2022, 28, 31–38. [Google Scholar] [CrossRef]

Paranjape, K.; Schinkel, M.; Nannan Panday, R.; Car, J.; Nanayakkara, P. Introducing Artificial Intelligence Training in Medical Education. JMIR Med. Educ. 2019, 5, e16048. [Google Scholar] [CrossRef]

Borji, A. A Categorical Archive of ChatGPT Failures. arXiv 2023, arXiv:2302.03494. [Google Scholar] [CrossRef]

Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med. 2009, 6, e1000097. [Google Scholar] [CrossRef][Green Version]

Harzing, A.-W. Publish or Perish. Available online: https://harzing.com/resources/publish-or-perish (accessed on 16 February 2023).

Chen, T.J. ChatGPT and Other Artificial Intelligence Applications Speed up Scientific Writing. Available online: https://journals.lww.com/jcma/Citation/9900/ChatGPT_and_other_artificial_intelligence.174.aspx (accessed on 16 February 2023).

Thorp, H.H. ChatGPT is fun, but not an author. Science 2023, 379, 313. [Google Scholar] [CrossRef]

Kitamura, F.C. ChatGPT Is Shaping the Future of Medical Writing but Still Requires Human Judgment. Radiology 2023, 230171. [Google Scholar] [CrossRef]

Lubowitz, J. ChatGPT, An Artificial Intelligence Chatbot, Is Impacting Medical Literature. Arthroscopy, 2023; in press. [Google Scholar] [CrossRef]

Nature editorial. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature 2023, 613, 612. [Google Scholar] [CrossRef]

Moons, P.; Van Bulck, L. ChatGPT: Can artificial intelligence language models be of value for cardiovascular nurses and allied health professionals. Available online: https://academic.oup.com/eurjcn/advance-article/doi/10.1093/eurjcn/zvad022/7031481 (accessed on 8 February 2023).

Cahan, P.; Treutlein, B. A conversation with ChatGPT on the role of computational systems biology in stem cell research. Stem. Cell. Rep. 2023, 18, 1–2. [Google Scholar] [CrossRef]

Ahn, C. Exploring ChatGPT for information of cardiopulmonary resuscitation. Resuscitation 2023, 185, 109729. [Google Scholar] [CrossRef] [PubMed]

Gunawan, J. Exploring the future of nursing: Insights from the ChatGPT model. Belitung Nurs. J. 2023, 9, 1–5. [Google Scholar] [CrossRef]

D’Amico, R.S.; White, T.G.; Shah, H.A.; Langer, D.J. I Asked a ChatGPT to Write an Editorial About How We Can Incorporate Chatbots Into Neurosurgical Research and Patient Care. Neurosurgery 2023, 92, 993–994. [Google Scholar] [CrossRef]

Fijačko, N.; Gosak, L.; Štiglic, G.; Picard, C.T.; John Douma, M. Can ChatGPT Pass the Life Support Exams without Entering the American Heart Association Course? Resuscitation 2023, 185, 109732. [Google Scholar] [CrossRef] [PubMed]

Mbakwe, A.B.; Lourentzou, I.; Celi, L.A.; Mechanic, O.J.; Dagan, A. ChatGPT passing USMLE shines a spotlight on the flaws of medical education. PLoS Digit. Health 2023, 2, e0000205. [Google Scholar] [CrossRef]

Huh, S. Issues in the 3rd year of the COVID-19 pandemic, including computer-based testing, study design, ChatGPT, journal metrics, and appreciation to reviewers. J. Educ. Eval. Health Prof. 2023, 20, 5. [Google Scholar] [CrossRef]

O’Connor, S. Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? Nurse Educ. Pract. 2023, 66, 103537. [Google Scholar] [CrossRef]

Shen, Y.; Heacock, L.; Elias, J.; Hentel, K.D.; Reig, B.; Shih, G.; Moy, L. ChatGPT and Other Large Language Models Are Double-edged Swords. Radiology 2023, 230163. [Google Scholar] [CrossRef]

Gordijn, B.; Have, H.t. ChatGPT: Evolution or revolution? Med. Health Care Philos. 2023, 26, 1–2. [Google Scholar] [CrossRef]

Mijwil, M.; Aljanabi, M.; Ali, A. ChatGPT: Exploring the Role of Cybersecurity in the Protection of Medical Information. Mesop. J. CyberSecurity 2023, 18–21. [Google Scholar] [CrossRef]

The Lancet Digital Health. ChatGPT: Friend or foe? Lancet Digit. Health 2023, 5, e112–e114. [Google Scholar] [CrossRef]

Aljanabi, M.; Ghazi, M.; Ali, A.; Abed, S. ChatGpt: Open Possibilities. Iraqi J. Comput. Sci. Math. 2023, 4, 62–64. [Google Scholar] [CrossRef]

Kumar, A. Analysis of ChatGPT Tool to Assess the Potential of its Utility for Academic Writing in Biomedical Domain. Biol. Eng. Med. Sci. Rep. 2023, 9, 24–30. [Google Scholar] [CrossRef]

Zielinski, C.; Winker, M.; Aggarwal, R.; Ferris, L.; Heinemann, M.; Lapeña, J.; Pai, S.; Ing, E.; Citrome, L. Chatbots, ChatGPT, and Scholarly Manuscripts: WAME Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications. Maced J. Med. Sci. 2023, 11, 83–86. [Google Scholar] [CrossRef]

Biswas, S. ChatGPT and the Future of Medical Writing. Radiology 2023, 223312. [Google Scholar] [CrossRef]

Stokel-Walker, C. ChatGPT listed as author on research papers: Many scientists disapprove. Nature 2023, 613, 620–621. [Google Scholar] [CrossRef]

van Dis, E.A.M.; Bollen, J.; Zuidema, W.; van Rooij, R.; Bockting, C.L. ChatGPT: Five priorities for research. Nature 2023, 614, 224–226. [Google Scholar] [CrossRef] [PubMed]

Lund, B.; Wang, S. Chatting about ChatGPT: How may AI and GPT impact academia and libraries? Library Hi. Tech. News, 2023; ahead-of-print. [Google Scholar] [CrossRef]

Liebrenz, M.; Schleifer, R.; Buadze, A.; Bhugra, D.; Smith, A. Generating scholarly content with ChatGPT: Ethical challenges for medical publishing. Lancet Digit. Health 2023, 5, e105–e106. [Google Scholar] [CrossRef] [PubMed]

Manohar, N.; Prasad, S.S. Use of ChatGPT in Academic Publishing: A Rare Case of Seronegative Systemic Lupus Erythematosus in a Patient With HIV Infection. Cureus 2023, 15, e34616. [Google Scholar] [CrossRef] [PubMed]

Akhter, H.M.; Cooper, J.S. Acute Pulmonary Edema After Hyperbaric Oxygen Treatment: A Case Report Written With ChatGPT Assistance. Cureus 2023, 15, e34752. [Google Scholar] [CrossRef] [PubMed]

Holzinger, A.; Keiblinger, K.; Holub, P.; Zatloukal, K.; Müller, H. AI for life: Trends in artificial intelligence for biotechnology. N. Biotechnol. 2023, 74, 16–24. [Google Scholar] [CrossRef]

Mann, D. Artificial Intelligence Discusses the Role of Artificial Intelligence in Translational Medicine: A JACC: Basic to Translational Science Interview With ChatGPT. J. Am. Coll. Cardiol. Basic Trans. Sci. 2023, 8, 221–223. [Google Scholar] [CrossRef]

Patel, S.B.; Lam, K. ChatGPT: The future of discharge summaries? Lancet Digit. Health 2023, 5, e107–e108. [Google Scholar] [CrossRef]

Zhavoronkov, A. Rapamycin in the context of Pascal’s Wager: Generative pre-trained transformer perspective. Oncoscience 2022, 9, 82–84. [Google Scholar] [CrossRef]

Hallsworth, J.E.; Udaondo, Z.; Pedrós-Alió, C.; Höfer, J.; Benison, K.C.; Lloyd, K.G.; Cordero, R.J.B.; de Campos, C.B.L.; Yakimov, M.M.; Amils, R. Scientific novelty beyond the experiment. Microb. Biotechnol. 2023; Online ahead of print. [Google Scholar] [CrossRef]

Huh, S. Are ChatGPT’s knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: A descriptive study. J. Educ. Eval. Health Prof. 2023, 20, 1. [Google Scholar] [CrossRef]

Khan, A.; Jawaid, M.; Khan, A.; Sajjad, M. ChatGPT-Reshaping medical education and clinical management. Pak. J. Med. Sci. 2023, 39, 605–607. [Google Scholar] [CrossRef]

Gilson, A.; Safranek, C.W.; Huang, T.; Socrates, V.; Chi, L.; Taylor, R.A.; Chartash, D. How Does ChatGPT Perform on the United States Medical Licensing Examination? The Implications of Large Language Models for Medical Education and Knowledge Assessment. JMIR Med. Educ. 2023, 9, e45312. [Google Scholar] [CrossRef] [PubMed]

Kung, T.H.; Cheatham, M.; Medenilla, A.; Sillos, C.; De Leon, L.; Elepaño, C.; Madriaga, M.; Aggabao, R.; Diaz-Candido, G.; Maningo, J.; et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digit. Health 2023, 2, e0000198. [Google Scholar] [CrossRef]

Marchandot, B.; Matsushita, K.; Carmona, A.; Trimaille, A.; Morel, O. ChatGPT: The Next Frontier in Academic Writing for Cardiologists or a Pandora’s Box of Ethical Dilemmas. Eur. Heart J. Open 2023, 3, oead007. [Google Scholar] [CrossRef]

Wang, S.; Scells, H.; Koopman, B.; Zuccon, G. Can ChatGPT Write a Good Boolean Query for Systematic Review Literature Search? arXiv 2023, arXiv:2302.03495. [Google Scholar] [CrossRef]

Cotton, D.; Cotton, P.; Shipway, J. Chatting and Cheating. Ensuring academic integrity in the era of ChatGPT. EdArXiv, 2023; Preprint. [Google Scholar] [CrossRef]

Gao, C.A.; Howard, F.M.; Markov, N.S.; Dyer, E.C.; Ramesh, S.; Luo, Y.; Pearson, A.T. Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers. bioRxiv 2022. [Google Scholar] [CrossRef]

Polonsky, M.; Rotman, J. Should Artificial Intelligent (AI) Agents be Your Co-author? Arguments in favour, informed by ChatGPT. SSRN, 2023; Preprint. [Google Scholar] [CrossRef]

Aczel, B.; Wagenmakers, E. Transparency Guidance for ChatGPT Usage in Scientific Writing. PsyArXiv, 2023; Preprint. [Google Scholar] [CrossRef]

De Angelis, L.; Baglivo, F.; Arzilli, G.; Privitera, G.P.; Ferragina, P.; Tozzi, A.E.; Rizzo, C. ChatGPT and the Rise of Large Language Models: The New AI-Driven Infodemic Threat in Public Health. SSRN, 2023; Preprint. [Google Scholar] [CrossRef]

Benoit, J. ChatGPT for Clinical Vignette Generation, Revision, and Evaluation. medRxiv, 2023; Preprint. [Google Scholar] [CrossRef]

Sharma, G.; Thakur, A. ChatGPT in Drug Discovery. ChemRxiv, 2023; Preprint. [Google Scholar] [CrossRef]

Rao, A.; Kim, J.; Kamineni, M.; Pang, M.; Lie, W.; Succi, M.D. Evaluating ChatGPT as an Adjunct for Radiologic Decision-Making. medRxiv 2023. [Google Scholar] [CrossRef]

Antaki, F.; Touma, S.; Milad, D.; El-Khoury, J.; Duval, R. Evaluating the Performance of ChatGPT in Ophthalmology: An Analysis of its Successes and Shortcomings. medRxiv, 2023; Preprint. [Google Scholar] [CrossRef]

Aydın, Ö.; Karaarslan, E. OpenAI ChatGPT generated literature review: Digital twin in healthcare. SSRN, 2022; Preprint. [Google Scholar] [CrossRef]

Sanmarchi, F.; Bucci, A.; Golinelli, D. A step-by-step Researcher’s Guide to the use of an AI-based transformer in epidemiology: An exploratory analysis of ChatGPT using the STROBE checklist for observational studies. medRxiv, 2023; Preprint. [Google Scholar] [CrossRef]

Duong, D.; Solomon, B.D. Analysis of large-language model versus human performance for genetics questions. medRxiv, 2023; Preprint. [Google Scholar] [CrossRef]

Yeo, Y.H.; Samaan, J.S.; Ng, W.H.; Ting, P.-S.; Trivedi, H.; Vipani, A.; Ayoub, W.; Yang, J.D.; Liran, O.; Spiegel, B.; et al. Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma. medRxiv, 2023; Preprint. [Google Scholar] [CrossRef]

Bašić, Ž.; Banovac, A.; Kružić, I.; Jerković, I. Better by You, better than Me? ChatGPT-3 as writing assistance in students’ essays. arXiv, 2023; Preprint. [Google Scholar] [CrossRef]

Hisan, U.; Amri, M. ChatGPT and Medical Education: A Double-Edged Sword. Researchgate, 2023; Preprint. [Google Scholar] [CrossRef]

Jeblick, K.; Schachtner, B.; Dexl, J.; Mittermeier, A.; Stüber, A.T.; Topalis, J.; Weber, T.; Wesp, P.; Sabel, B.; Ricke, J.; et al. ChatGPT Makes Medicine Easy to Swallow: An Exploratory Case Study on Simplified Radiology Reports. arXiv 2022, arXiv:2212.14882. [Google Scholar] [CrossRef]

Nisar, S.; Aslam, M. Is ChatGPT a Good Tool for T&CM Students in Studying Pharmacology? SSRN, 2023; Preprint. [Google Scholar] [CrossRef]

Lin, Z. Why and how to embrace AI such as ChatGPT in your academic life. PsyArXiv, 2023; Preprint. [Google Scholar] [CrossRef]

Taecharungroj, V. “What Can ChatGPT Do?”; Analyzing Early Reactions to the Innovative AI Chatbot on Twitter. Big Data Cogn. Comput. 2023, 7, 35. [Google Scholar] [CrossRef]

Cascella, M.; Montomoli, J.; Bellini, V.; Bignami, E. Evaluating the Feasibility of ChatGPT in Healthcare: An Analysis of Multiple Clinical and Research Scenarios. J. Med. Syst. 2023, 47, 33. [Google Scholar] [CrossRef] [PubMed]

Nachshon, A.; Batzofin, B.; Beil, M.; van Heerden, P.V. When Palliative Care May Be the Only Option in the Management of Severe Burns: A Case Report Written With the Help of ChatGPT. Cureus 2023, 15, e35649. [Google Scholar] [CrossRef] [PubMed]

Kim, S.G. Using ChatGPT for language editing in scientific articles. Maxillofac. Plast. Reconstr. Surg. 2023, 45, 13. [Google Scholar] [CrossRef] [PubMed]

Ali, S.R.; Dobbs, T.D.; Hutchings, H.A.; Whitaker, I.S. Using ChatGPT to write patient clinic letters. Lancet Digit. Health, 2023; Online ahead of print. [Google Scholar] [CrossRef]

Shahriar, S.; Hayawi, K. Let’s have a chat! A Conversation with ChatGPT: Technology, Applications, and Limitations. arXiv 2023, arXiv:2302.13817. [Google Scholar] [CrossRef]

Alberts, I.L.; Mercolli, L.; Pyka, T.; Prenosil, G.; Shi, K.; Rominger, A.; Afshar-Oromieh, A. Large language models (LLM) and ChatGPT: What will the impact on nuclear medicine be? Eur. J. Nucl. Med. Mol. Imaging, 2023; Online ahead of print. [Google Scholar] [CrossRef]

Sallam, M.; Salim, N.A.; Barakat, M.; Al-Tammemi, A.B. ChatGPT applications in medical, dental, pharmacy, and public health education: A descriptive study. Narra J. 2023, 3, e103. [Google Scholar] [CrossRef]

Quintans-Júnior, L.J.; Gurgel, R.Q.; Araújo, A.A.S.; Correia, D.; Martins-Filho, P.R. ChatGPT: The new panacea of the academic world. Rev. Soc. Bras. Med. Trop. 2023, 56, e0060. [Google Scholar] [CrossRef]

Homolak, J. Opportunities and risks of ChatGPT in medicine, science, and academic publishing: A modern Promethean dilemma. Croat. Med. J. 2023, 64, 1–3. [Google Scholar] [CrossRef]

Checcucci, E.; Verri, P.; Amparore, D.; Cacciamani, G.E.; Fiori, C.; Breda, A.; Porpiglia, F. Generative Pre-training Transformer Chat (ChatGPT) in the scientific community: The train has left the station. Minerva Urol. Nephrol. 2023; Online ahead of print. [Google Scholar] [CrossRef]

Smith, R. Peer review: A flawed process at the heart of science and journals. J. R. Soc. Med. 2006, 99, 178–182. [Google Scholar] [CrossRef][Green Version]

Mavrogenis, A.F.; Quaile, A.; Scarlat, M.M. The good, the bad and the rude peer-review. Int. Orthop. 2020, 44, 413–415. [Google Scholar] [CrossRef][Green Version]

Margalida, A.; Colomer, M. Improving the peer-review process and editorial quality: Key errors escaping the review and editorial process in top scientific journals. PeerJ. 2016, 4, e1670. [Google Scholar] [CrossRef][Green Version]

Ollivier, M.; Pareek, A.; Dahmen, J.; Kayaalp, M.E.; Winkler, P.W.; Hirschmann, M.T.; Karlsson, J. A deeper dive into ChatGPT: History, use and future perspectives for orthopaedic research. Knee Surg. Sports Traumatol. Arthrosc. 2023; Online ahead of print. [Google Scholar] [CrossRef]

Nolan, C. Interstellar; 169 minutes; Legendary Entertainment: Burbank, CA, USA, 2014. [Google Scholar]

Kostick-Quenet, K.M.; Gerke, S. AI in the hands of imperfect users. Npj Digit. Med. 2022, 5, 197. [Google Scholar] [CrossRef] [PubMed]

Frederico, G.F.; Garza-Reyes, J.A.; Anosike, A.; Kumar, V. Supply Chain 4.0: Concepts, Maturity and Research Agenda. Supply Chain. Manag. 2020, 25, 262–282. [Google Scholar] [CrossRef]

Toorajipour, R.; Sohrabpour, V.; Nazarpour, A.; Oghazi, P.; Fischl, M. Artificial intelligence in supply chain management: A systematic literature review. J. Bus. Res. 2021, 122, 502–517. [Google Scholar] [CrossRef]

Frederico, G.F. From Supply Chain 4.0 to Supply Chain 5.0: Findings from a Systematic Literature Review and Research Directions. Logistics 2021, 5, 49. [Google Scholar] [CrossRef]

Younis, H.; Shishodia, A.; Gunasekaran, A.; Min, H.; Munim, Z.H. Applications of artificial intelligence and machine learning within supply chains: Systematic review and future research directions. J. Model. Manag. 2022, 17, 916–940. [Google Scholar] [CrossRef]

Sharma, R.; Sundarakani, B.; Alsharairi, M. The role of artificial intelligence in supply chain management: Mapping the territory. Int. J. Prod. Res. 2022, 60, 7527–7550. [Google Scholar] [CrossRef]

Ahmed, T.; Karmaker, C.L.; Nasir, S.B.; Moktadir, M.A.; Paul, S.K. Modeling the artificial intelligence-based imperatives of industry 5.0 towards resilient supply chains: A post-COVID-19 pandemic perspective. Comput. Ind. Eng. 2023, 177, 109055. [Google Scholar] [CrossRef] [PubMed]

O’Marah, K. ChatGPT and Supply Chain: The Good, The Bad, and The Ugly; Zero100: London, UK, 2023; Available online: https://zero100.com/content/chatgpt-and-supply-chain-the-good-the-bad-and-the-ugly/ (accessed on 11 February 2023).

