I don’t actually use LinkedIn that often, and when I sign in to my account, I’m always surprised by how active people are there: posting content to their timelines, liking content, and probably (hopefully!) messaging potential new hiring managers about job openings.
Over the past week, I have logged into LinkedIn to see trending posts from current and past colleagues which read like the typically enthusiastic posts you’d expect on your LinkedIn timeline. They’re celebrations of work anniversaries: look how much I have accomplished and how happy I am to work for X company.
Two of these posts in particular stood out as being popular, and I took the time to drop them a reaction (a thumbs up, heart, or clapping emoji) and then leave a comment congratulating them.
And then the story unfolds.
This post is about LinkedIn and the ethics of using the latest developments in Artificial Intelligence (AI) technology, specifically ChatGPT from OpenAI. Microsoft (my employer) owns or has heavily invested in both of these companies.
I’m probably obliged to say that this is not a statement on how these companies are using the technology, but rather an observation about how people are using the technology that these companies create.
If you’re reading this, Satya Nadella, please don’t send your security team to leave a box of my belongings on my desk. Cool. Thanks.
Fool me once…
Below you’ll see a screenshot of the first post I encountered. I read it and nearly passed it by because I had no intent to interact with anything on LinkedIn other than to message one specific person. But, as is often the case, I got distracted.
I liked the post and clicked through to see what people were commenting.
On second glance, the post seemed a little peculiar. While crafting my reply, I wondered: “Are those really words that author would say?” I wasn’t so sure.
Their native language is not English, yet they used phrases I would consider quite uncommon even for native speakers, never mind for someone writing in a second language: “a leap of faith”, “loop-de-loop”, and “real eye-opener”.
I didn’t put too much thought into it, wrote a reply, and moved on. But I did make sure my reply included one of those phrases that are uncommon for a non-native speaker:
And shortly after crafting the reply, my phone pinged and I had a message from the original author of the LinkedIn post:
Did you realised (sic) it was a computer talking?
OpenAI is so great it can even write your LinkedIn posts.
Now, honestly, I didn’t fully realise it was a computer talking. Despite being a little uncertain about the use of language, I accepted the post at face value as being genuine and moved on with my life. I didn’t take the time to analyse it; I simply consumed.
But I had reacted and responded to that post. Despite a whiff of sarcasm, my reply was sincere and genuine. It was crafted using my brain and emotions. But now I know the original post wasn’t.
“Fool me once, shame on you”
I’d fallen for the AI-generated post and misconstrued it for being a real, genuine, heartfelt message.
(Yet, I’m still not sure anything that anyone posts on LinkedIn is 100% genuine. It’s often done for some sort of corporate-social networking clout.)
The author of the LinkedIn post admitted to me that they were experimenting with ChatGPT – the latest AI technology from OpenAI, which has been trained to generate human-sounding text. And because my reply quoted a phrase from their original post, they thought their experiment had been rumbled!
Fool me twice…
And so, the following day, my phone beeps again. It’s another message from the person who ‘wrote’ the LinkedIn post above. They had sent me a screenshot of a second LinkedIn post, authored by a different person, which I had also commented on.
This is ChatGTP (sic)
Ahhh people who use it can recognise this so easy
And so I revisited the post, and I genuinely couldn’t tell that it was written by AI. That’s mostly because it was from an ex-colleague I haven’t spoken to in years. I recall their English being good, but to what standard? I don’t recall.
On reassessment of the post, there were a few signs that it was authored by a machine – the strange capitalisation of a word and the over-enthusiastic tone. But of course, the real test would be to ask them. So I did.
Me: Did you use AI to write this
Me: Weird question I know, but one of my friends has been posting LinkedIn messages generated by AI. This came up in their feed and they were like “This is AI written!!”
Me: Great that you were honest about it.
Them: Nothing to hide, writing is not one of my skills. But use of tech is.
Me: Can I blur out the name, photo (all the personal details) and use a screenshot of your post in a blog post I’m writing about ChatGPT?
And so that is why the second example you see above is a dummy LinkedIn post, with blurred-out cat images. It’s for illustrative purposes.
But the conversation? 100% real. And the flip from “Nothing to hide” to declining to let me share the post was particularly surprising.
“Fool me twice, shame on me”
All of this really brings me to a question of ethics and trust. You fool me once and maybe I learn not to trust that one source. You fool me twice, and maybe I only have myself to blame and become a little less trusting of everyone and everything. And is that healthy?
I think it’s safe to say at this point, after encountering and uncovering two examples of AI on LinkedIn “in the wild” posing as genuinely authored posts, that I am personally not comfortable with using the technology in this manner. I will likely approach posts of this nature on LinkedIn with a large dose of scepticism, and the experience has degraded my trust in the platform as a whole.
Whilst AI-assisted text generation can be a great tool for good, it can also – as these examples show – erode trust in people and products.
The posts in question currently total over 500 ‘reactions’ and over 50 comments from individual people. I am very curious to know what percentage of those 500 people knew it was AI-generated text they were reacting to, or even responding to. Neither of the posts explicitly credits the text as AI-generated.
Even if someone consumed the post and swiftly moved on with their life, as I initially did, that post leaves an impression that you tie to the author. The character you build around that person in your head is going to be different when you find out that the words you presumed came from their brain were in fact written by a machine.
I want to be making genuine, human-to-human interactions in real life and online. And, if I have to talk to a robot, I want it to be very clear that I am talking to a robot.
I’ve been reading and watching a lot about OpenAI’s technology over the past few weeks, but in real-life scenarios it’s not at the front of my mind to actively search for AI patterns in text attributed to a human.
From the interactions above, ChatGPT is clearly detectable by those who are aware of it, are familiar with it, and have used it to generate posts of their own.
You can come to your own conclusions. I’m not sure I’m quite ready, educated, or qualified enough to summarise this post in an eloquent way.
It’s just here, it exists and – like all the content on my blog – it was written by a human.
The header image for this post contains a render of two plastic figures using digital devices, which I stole from Brett Jordan on Unsplash.