When I’m not knee-deep in algorithms or helping CEOs embrace digital transformation, I lace up my running shoes and hit the trails. Some of my friends think I’m crazy for spending hours out there alone, with only koalas and snakes for company. I beg to differ. There’s something about the crunch of gravel and the scent of eucalyptus that hits my reset button. And trail running isn’t just an escape; it’s also a community: I’ve met some of my best friends while scrambling up steep trails. Sometimes, tech and trails intersect in the most surprising ways.
A few weeks back, I crossed paths with a fellow trail enthusiast, and as it happened, we began chatting. Ordinarily, my runs are a tech-free sanctuary, but our conversation took an unexpected turn towards generative AI. My running mate, deeply involved in marketing strategy, casually mentioned using ChatGPT for market research. “How do you validate the data from ChatGPT?” I asked her. The response? “Why would I? It’s ChatGPT!” I hit a root and stumbled.
Ah, the classic tech bubble. While some of us sleep, breathe, and dream in algorithms, not everyone understands the ‘whys’ and ‘hows’ of using AI responsibly. Cue a half-hour trail-side seminar on why you shouldn’t take ChatGPT—or any AI model—at face value, and why the information in your prompts needs to be guarded like a secret family recipe (we both loved the chat, mind you; it wasn’t a condescending lecture). We finished the run not just happy to have clocked a few kilometres in the forest but also a little richer for it: she left with new ideas for her workflow, and I was reminded that as new technologies emerge, I need to keep preaching the basics.
This real-world encounter got me thinking about a past newsletter still picking up steam. You might remember my “10 ChatGPT Checks” piece, a practical guide to responsibly using AI. Incredibly, it’s remained timeless (well, for eight months, which is forever in AI years). Recently, Nature published guidelines that echo these ideas but with an academic twist.
So, inspired by this, let’s take the conversation a step further. I’ve distilled the Nature guidelines into snack-sized insights. Whether you’re a researcher, a business professional, or somewhere in between, these are your go-to checkpoints for responsible AI use.
Find the source paper in Nature: Bockting, C. L., Van Dis, E. A., Van Rooij, R., Zuidema, W., & Bollen, J. (2023). Living guidelines for generative AI — Why scientists must oversee its use. Nature, 622(7984), 693-696. https://doi.org/10.1038/d41586-023-03266-1
Researchers, Reviewers, and Editors
1. Human Responsibility
Is a human overseeing the final research output?
Given that generative AI can’t guarantee the truthfulness or traceability of content, human oversight is essential for steps like data interpretation, manuscript writing, peer review, research gap identification, and hypothesis development.
2. AI Acknowledgment
Are you disclosing your use of generative AI in publications?
Always specify which tasks in your research involved generative AI when presenting or publishing.
3. Tool Transparency
Did you specify which generative AI tool and version you used?
For transparency, it’s essential to specify which AI tool, and which version of it, you used in your research.
4. Open Science Commitment
Have you preregistered your use of generative AI?
Adhering to open-science principles, preregister the prompts you’ll use, and make the AI tool’s input and output publicly available with your publication (see the sketch after the checklist for one way to capture this).
5. Replication Factor
Are you open to replicating your findings with a different AI tool?
If your work relies heavily on a generative AI tool, consider confirming your results with a different tool where applicable.
Scientific Journals
6. Editorial Disclosure
Is the journal acknowledging its use of generative AI for peer review or selection?
Transparency regarding the use of generative AI tools in the editorial process is vital.
7. Reviewer Clarification
Are reviewers indicating their level of reliance on generative AI?
Journals should ask reviewers to specify how much they used generative AI in their reviews.
LLM Developers and Companies
8. Pre-Launch Transparency
Are all details about training data and algorithms shared with an independent auditor?
Prior to public launch, all relevant data should be shared with an independent scientific organization for auditing purposes.
9. Ongoing Sharing
Are adaptations and algorithms continuously shared with an independent auditing body?
Keep the independent auditing body in the loop regarding updates, adaptations, and algorithm changes.
10. Public Reporting Portal
Is there a platform for users to report biased or inaccurate content?
A portal should be established for this purpose, and the independent auditing body should have full access to it.
Research Funding Organizations
11. Policy Adherence
Are your research policies aligned with these guidelines?
Research integrity policies should be compliant with these living guidelines.
12. Human Involvement in Funding
Are humans actively evaluating research funding proposals?
Avoid sole reliance on generative AI tools for evaluations; human assessment should be involved.
13. AI Use Disclosure
Is the use of generative AI tools for evaluating research proposals acknowledged?
Research funding organizations should be transparent about how they are using generative AI in their evaluation process.
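A quick bonus for the hands-on researchers among you: the disclosure and preregistration checks above (2, 3, and 4) become much easier if you capture your prompts and the model’s responses as you go. Here’s a minimal sketch of what that could look like in Python. To be clear, the file name, the fields, and the record_interaction helper are my own illustration of the idea, not anything prescribed by the Nature guidelines.

```python
# ai_use_log.py -- illustrative only: a tiny audit trail for generative-AI use in a project.
# The file name, fields, and helper below are my own suggestions, not part of the Nature guidelines.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_interactions.jsonl")  # one JSON record per line, easy to share alongside a paper


def record_interaction(tool: str, version: str, task: str, prompt: str, response: str) -> None:
    """Append one prompt/response pair, with tool and version metadata, to the project log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,          # e.g. "ChatGPT" -- check 3: tool transparency
        "version": version,    # e.g. the model version string -- check 3: version transparency
        "task": task,          # e.g. "literature summary" -- check 2: AI acknowledgment
        "prompt": prompt,      # check 4: the preregistered prompt, published with the paper
        "response": response,  # check 4: the model's output, published with the paper
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")


if __name__ == "__main__":
    # Hypothetical example: log a single market-research style query with a placeholder reply.
    record_interaction(
        tool="ChatGPT",
        version="example-version",
        task="market research brainstorm",
        prompt="List common objections SMEs raise about adopting generative AI.",
        response="(paste the model's actual reply here)",
    )
    print(f"Logged 1 interaction to {LOG_FILE.resolve()}")
```

Drop a file like this next to your data and supplementary material, and checks 2 to 4 are largely covered with almost no extra effort.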
Let’s not underestimate the power of AI—but let’s not overestimate it either. As tech professionals, or simply as tech users, we need to be savvy about when, where, and how we deploy these tools. Remember, we’re still lacing up the running shoes, choosing the trails, and taking those leaps—literal or digital.
Catch you on the next run or in the next newsletter!