Data Digest #3: DNA Detectives and Hotel Hijacks

October 4, 2022

The cogs of the data world are perpetually turning. Data never sleeps. Brace yourself for an overview of some of the top data news stories that have graced our screens over the past month.

The Telegraph: catching criminals via ancestry websites

Law enforcement professionals don’t simply give up on “cold cases”. They don’t put up their hands in surrender and admit defeat, leaving the files in a dark room in the corner of a police station to gather dust. Police are always investigating historical cases, in a bid to provide families with some long-overdue justice. And recently, a new technology has helped to crack the code on hundreds of unsolved crimes.

Investigative genetic genealogy (IGG) combines DNA data with traditional genealogical research to uncover culprits who have yet to face repercussions for their actions. The technique garnered major media attention four years ago, when distant relatives of murderer and rapist Joseph DeAngelo uploaded their genetic data to the ancestry website GEDmatch, ultimately leading to his arrest.

IGG has been hailed by many as a breakthrough technological development. However, it is also rife with ethical concerns. Those using ancestry websites could never have imagined that their DNA data would end up putting someone they're related to behind bars, and consumers have raised valid concerns that using relatives' data in this way crosses a line. It's also unlikely to have as much of an impact in the UK as in the US, where DNA databases are known to be more extensive.

The Telegraph: How police are using DNA from ancestry websites to catch murderers

The New York Times: Smartphone data predicts suicides?

Is it possible to predict suicide using smartphone data? That's exactly what a group of Harvard psychologists is trying to figure out. A research project currently underway is using new developments in AI and machine learning to gauge whether it's possible to predict suicide and prevent it before it happens. Patients participating in the study are monitored through their smartphones via biosensors, GPS tracking, questionnaires and other tools. The information gathered from these technologies is fed to researchers in the Harvard psychology department, who then assess what it reveals about the patient's current mental state.

Researchers are keeping an eye out for an array of warning signs, such as changes in a patient's sleeping pattern or a worrying result on a mood questionnaire. If a patient's data indicates unusual or worrying behaviour, suggesting that they plan on hurting themselves, a researcher will get in touch with them by phone and, if necessary, call 911.

With a plethora of data (quite literally) at our fingertips, it makes sense to tap into this resource if doing so could save lives. However, this kind of monitoring isn't without its fair share of controversy. For one thing, it is impossible to predict every suicide in this way: a patient could easily mislead clinicians simply by not being totally candid in their questionnaire answers. There's also the very real risk of false positives, which would leave patients facing the trauma of an unwarranted intervention. And it's likely that many people experiencing suicidal thoughts would not be in the headspace to consent to this type of monitoring in the first place.

The New York Times: Can Smartphones Help Predict Suicide?

BBC: Hacking hotels

The InterContinental Hotels Group (IHG) was in for a nasty shock when it discovered that a Vietnamese couple had hacked into the company's computer systems. The couple then reached out to the BBC on the messaging app Telegram, holding up screenshots like victory trophies to prove that they had indeed gained access to the company's Microsoft Teams server, as well as its internal Outlook emails.

The pair originally planned a ransomware attack, but when they realised they had been stopped in their tracks, they went ahead and caused as much damage as they could anyway, purely out of spite. They performed a wiper attack, which destroys data and documents so that they can never be retrieved. Whilst they did not obtain any customer data, they did manage to get hold of some corporate data.

The hackers infiltrated the system via a misleading email attachment that led an employee to download a deceptive piece of software. From there, they were able to make their way into the more private quarters of the computer system without much hassle. Why? Because of an absurdly easy and common password: Qwerty1234, the first password you're warned against using in ICT lessons at primary school. It was equally surprising that 200,000 IHG employees had access to this sensitive content. This incident serves as a reminder of the importance of putting a watertight security system in place to protect against malicious attacks such as these.

BBC: IHG hack: ‘Vindictive’ couple deleted hotel chain data for fun

The Guardian: TikTok and children’s privacy

Following an investigation conducted by the Information Commissioner's Office (ICO), TikTok has been issued with a 'notice of intent' regarding a potential breach of UK data protection law between May 2019 and July 2020. The app could now face a fine of £27 million. The ICO's preliminary findings are that TikTok may have processed data belonging to children under the age of 13 without parental consent. As part of this alleged breach, TikTok may also have exposed special category data, such as the children's ethnicities and political opinions.

Whilst the investigation is still very much underway, TikTok representatives dispute the preliminary findings thus far. The app is just one of more than 50 online services currently being investigated by the ICO, in a bid to assess whether they too are complying with child data protection laws. Meta also came under scrutiny this year, when it was fined £349 million for allowing teenagers to create online profiles that openly displayed their personal phone numbers. The world of data privacy is a minefield, and companies have a responsibility to protect the most vulnerable users of their online services.

The Guardian: TikTok could face £27m fine for failing to protect children’s privacy
