Data Digest #5: Buzzfeed Quizzes and Angry Artists

February 1, 2023

The cogs of the data world are perpetually turning. Data never sleeps. Brace yourself for an overview of some of the top data news stories that have graced our screens over the past month.

The Guardian: BuzzFeed using AI for content and quizzes

If you were a pre-teen in the 2010s, chances are you’ll be all too familiar with the wonderful world of BuzzFeed quizzes. It was a universal experience: coming home from school and heading straight for the computer, where instead of doing homework, you’d while away the hours doing quizzes to find out which character from The Vampire Diaries or which type of Converse trainer you are. It was the pinnacle of pre-teen procrastination.

Well, it turns out BuzzFeed is moving with the times, as it is now planning to incorporate artificial intelligence into its online quizzes and content. According to an internal memo sent to employees, the company will begin using technology from OpenAI, the company behind ChatGPT, the famous (and controversial) chatbot currently taking the world by storm. BuzzFeed is the latest in a string of journalistic platforms to adopt artificial intelligence for content purposes. Whilst the technology is by no means perfect (human editors are still required to scout for errors), is it only a matter of time before journalism as we know it changes forever?

The Guardian: BuzzFeed to use AI to ‘enhance’ its content and quizzes – report

The Telegraph: Shopping habits diagnosing ovarian cancer?

According to a new study, data from loyalty cards could indicate early signs of ovarian cancer in women up to eight months before diagnosis. Women suffering from ovarian cancer in its earliest stages may experience problems such as bloating and indigestion, and so many turn to over-the-counter medication to resolve these issues, believing it to be nothing serious. It may now be possible to harness this purchasing data to diagnose women with the disease earlier.

Early diagnosis is vital when it comes to treatment. 93% of women diagnosed with the disease in its earliest stages survive for five years or more, whilst the outlook for those diagnosed at the latest stage is much less positive, with five-year survival as low as 13%. This data could thus be a game-changer, potentially saving the lives of thousands of women.

The Telegraph: How your shopping habits could help diagnose ovarian cancer eight months earlier

Wired: ChatGPT as a means of cheating on homework

As mentioned earlier, ChatGPT is currently causing quite the sensation in the tech world. The chatbot is able to respond to a vast array of queries with astounding accuracy; it can solve computer bugs, answer problem-solving questions and even recommend reading material based on your taste in books.

Whilst many companies are already rushing to adopt the technology (see above), others are less enthused about the prospect of ChatGPT infiltrating their workplace. Teachers, in particular, fear it might spell disaster for their students. New York City’s Department of Education has already banned it as a pre-emptive measure.

As we know, the technology has the potential to answer pretty much any question you throw at it. Does that mean that students could ask the chatbot their homework questions and copy the answers?

Well, when some teachers put this to the test by asking the technology some typical questions themselves, the answer turned out to be a pretty resounding no. Whilst ChatGPT is able to answer questions and solve problems, there is one consistent flaw: the answers don’t read as authentically human. No 11-year-old child would answer questions the way the chatbot does, and equally, no GCSE student could plausibly pass off an essay written by the software as their own. So, while ChatGPT has its fair share of uses, the days of students realistically being able to use it to cheat in exams and on their homework are still a long way off.

Wired: ChatGPT Is Coming for Classrooms. Don’t Panic

BBC: Covid-19 modelling data draws to a close

It seems like the dark days of the pandemic might well and truly be behind us. The UK Health Security Agency announced at the beginning of this month that it will stop publishing Covid-19 modelling data. The data is no longer considered necessary for public health, as people are living with and managing the virus in a much more controlled way, thanks to vaccines.

When we were still in the throes of the pandemic, many of us religiously checked the R-rate (the reproductive rate of the virus), which was updated weekly. Since April, this has been reduced to a fortnightly update, and now it will no longer be tracked as closely. From now on, the medical community will continue to monitor the disease, reintroducing the modelling data if necessary (if a new variant of concern emerges, for example).

BBC: UK Covid modelling data to stop being published

Financial Times: Getty Images files lawsuit against Stability AI

Generative AI may seem like a fun, innocent tool at first. Many of us have dabbled with the Lensa AI app to see what we’d look like as cartoon characters after the platform went viral on social media. But how ethical is this technology in reality? Generative AI cannot exist in a vacuum. It’s built off the back of billions of pieces of artwork made by real, human creators.

In light of this, Getty Images has filed a copyright claim in the UK High Court against one such platform, Stability AI, which is a free tool that generates images for users. Getty claims that Stability AI copied millions of images – many of which potentially came from Getty’s store of over 135 million images – in making their technology a reality.

Tech companies across the globe will be particularly invested in this case, as the way it unfolds will have huge implications for the future. If the court finds that Stability AI processing artists’ work for its own purposes, without credit or compensation, constitutes copyright infringement, similar AI tools will all need to change tack to avoid lawsuits of their own. Whatever happens, this case could significantly change the course of AI development in the UK.

Financial Times: Art and artificial intelligence collide in landmark legal dispute
