I’ll start with a confession/disclaimer: I have dabbled in generative AI tools to produce shoddy images in the past, before I did more research and learnt just how bad it is for the environment, for actual artists, etc. In my previous school librarian job, I used a tool built into Canva to generate images for the library social media, something I would never do now that I have learnt more about the processes underpinning it. This is, in fact, the main point of this post: if you do questionable things in full knowledge of exactly what is involved, then you are in the wrong.
In my role as a health librarian, I have taken charge of our social media accounts, moving us from X/Twitter to Bluesky and developing our online presence. This is a facet of librarianship that I have long been interested in, and it’s great to be given a certain degree of freedom in how I present our library online.
We use our Bluesky largely to repost recent studies and articles related to paediatrics; because of this, we follow a large array of journal accounts. One such account is Nursing in Critical Care, a journal published by Wiley and indexed on PubMed - in short, a reputable journal.
I was scrolling through the feed one afternoon, looking for articles to repost, when an image caught my eye. AI images are easy to spot once you know what you’re looking for; this one instantly stood out as just being, somehow, wrong. The post in question is below.
This is one of those images where, the longer you look, the worse it gets (much like many of the images this journal posts). For a start, the stethoscope appears to be emerging from her neck, the lines on the weirdly placed heart in the window are wobbly, the pole the IV bag is hanging from is phasing in and out of existence, and there seems to be a piece of furniture dissolving into the wall under the window.
When I first happened across this image, I thought ‘huh’, mentally filed it away, and carried on scrolling. However, it happened again, and again, to the point where I felt I had no choice but to say something. The image that finally sent me over the edge was this one:
Where to start with this absolute mess? Why are there so many recycling bins overflowing with plastic bottles? Why are the arrows on said bins so warped? And how is this medical professional seemingly able to rotate her head and torso 180 degrees?
This was too much for me. I chose to focus on the angle of environmental sustainability and to question whether those behind the account were aware of generative AI’s toll on the environment - ironic, given that the post itself was about sustainability in the ICU. Because I am not an idiot, I let my manager know what I was doing, and I used my own Bluesky account to reply to the above post. The resultant thread is below.
I wasn’t particularly impressed by this response, and I saw no point in continuing to engage in this way if the person behind these responses was willing to miss the point so blatantly (equating generative AI usage with social media usage in terms of environmental impact was an odd way to go). I still don’t see the issue with using stock images over these poorly made AI images, but anyway.
I decided to work out who, specifically, I was talking to. I’m not going to put their names here, but anyone who’s curious can find out for themselves if they scroll back far enough on the journal’s Bluesky page. Both are lecturers with PhDs; one posts regularly about her expertise in AI. How such highly educated people can be so irresponsible in their use of generative AI is completely puzzling to me - so I decided to email the journal, with some of the more shocking examples attached, to illustrate my point:
Hello,
I'm writing regarding my concerns over the content posted on your journal's Bluesky account - please do forward this email on to the relevant person/team if this is not the correct email address to be contacting.
I'm a healthcare librarian working within an NHS Trust, and as part of our library service's current awareness services, I use Bluesky to follow various journals. I've noticed on a number of occasions that your journal's Bluesky account uses AI-generated images - these are easily identifiable and, as my manager describes them, extremely distracting when the focus should be on the actual information shared in the written post. I have attached some of the more egregious examples.
Through my own Bluesky account, I politely challenged one such example on the basis of its poor quality and the message of the post, which was about sustainability and environmental concerns in the ICU. One issue I raised was the environmental impact of generative AI, particularly AI-generated images; there has been considerable research into the environmental impacts of this. When the end result is as poor as the examples on your journal's Bluesky feed, it does not seem worth the energy expended, and it somewhat damages the reputability of your journal. The response I got was evasive; this can be seen here, in the feed below the original post.
I would encourage the journal to have a discussion about whether using AI in this way, to produce such questionable results, is necessary. While I appreciate that social media is, to some extent, a visual medium, there are so many other methods to produce and incorporate images within your posts (and, as should be obvious, images may not always be necessary in every case). I'm all for using AI in healthcare when appropriate and well-considered, but I would argue that this is doing more damage than good to your journal's profile.
I look forward to hearing back from you.
I honestly didn’t expect to hear back, but five days later, I received the following:
Dear Dr. Beth Gilchrist,
Thank you for your email.
I would like to inform you that I have now forwarded your concern to the Editorial Team for an internal discussion.
Please do let me know if you have concerns or require further assistance.
At first, I noticed a definite lack of AI images on my Bluesky feed, and I foolishly dared to believe I’d had an impact; however, a couple of weeks later, they were back again in force. I’ve emailed back to ask about the results of the editorial team discussion (including a clarification that I am not a doctor!); if I get a response, I’ll put an update below.
Ultimately, AI has its uses - and what I’m advocating for here is responsible use of it. Generative AI isn’t always bad, but an awful lot of people use it without thinking about what that entails - whose art were those images trained on? Whose writing? What are the environmental costs of generating images that look incredibly naff anyway? How does using AI slop affect your reputation, or that of the organisation you work for?
As a P.S., I was scrolling Bluesky just the other day when yet another bad AI image caught my eye - but when I scrolled up, I was dismayed to see that the culprit this time was another NHS library. It’s particularly bad when your fellow library professionals are getting involved.
Update 16/6/25: I’m admittedly over a month late with this. I can’t fault NICC’s responsiveness, because they did reply to my request for an update remarkably quickly. Unfortunately, this response was to let me know that they would be forwarding my email on to their social media editors - the people apparently behind the terrible AI posts. The subsequent response I received from one of these two people is as follows:
Thank you for taking the time to share your concerns regarding the use of AI-generated images on the journal's Bluesky account.
Following your query, the Editors and Social Media Associate Editors convened to carefully consider the matter. We are currently exploring the use of AI-generated imagery as a visual communication tool to accompany article summaries. This is part of a ten- to twelve-month pilot project, during which we will assess platform metrics, including engagement, reach, and follower growth, to determine whether these images are supporting wider visibility and understanding of critical care content in meaningful ways.
As one of the Social Media Associate Editors, I am currently undergoing dedicated training on the use of AI in research. This includes ethical considerations and different systems for image generation and design. One of our goals is to evolve beyond generic visuals and instead use AI tools to produce informative infographics that summarise key aspects of the articles in an accessible, reader-friendly way. While the Bluesky content is still in its early days, you can already see a shift toward this approach in the materials shared on our LinkedIn account, where we are trialling article-specific generated infographics. We hope to assess what content our followers respond to over time.
Your feedback has played an important role in helping us reflect on these developments, and we remain committed to ensuring our content aligns with the standards and integrity of the journal. Thank you again for reaching out, and please do stay in touch.
I strongly suspect the individual who replied to me is the same person with whom I had the fruitless initial exchange on Bluesky. Her email did nothing to address several of the issues I raised, namely the environmental impact of their use of AI and the ethics of using it so freely. I am also concerned about the systems used and what they are trained on to create the AI ‘visuals’ clogging my Bluesky feed - does this journal actually know whether its images are being produced from stolen artwork? As for ‘supporting wider visibility’, I will not be reposting or positively interacting with any posts containing AI imagery, and I know many others feel the same way. The NICC Bluesky feed is, if anything, actively damaging the reputation of this otherwise reputable journal. I see absolutely no point in engaging further with this person, as they seem hell-bent on avoiding the actual point of the discussion.
Meanwhile, my manager has advised me to unfollow the NICC account on Bluesky for the sake of my sanity - I can only hope others follow suit. I’ve already spread the word about this particular journal’s Bluesky presence among other health librarians in the region, so perhaps at some point the message will finally get through.