Fake LinkedIn profiles use AI-generated portraits to impersonate companies

Creating fake social media accounts to trick people isn't a new tactic, but there's something sinister about this latest campaign that sets it apart from the crowd.

An in-depth analysis published on the KrebsOnSecurity blog claims that cybercriminals are using artificial intelligence (AI) to create profile pictures of non-existent people, then pairing them with job descriptions stolen from real people on LinkedIn.

In this way, they create fake profiles that most people would find almost impossible to identify as fake.

Many use cases

Users have spotted a growing trend of suspicious accounts trying to join invitation-only LinkedIn groups. Group owners and admins only figure out what's going on after receiving dozens of such requests at once and noticing that almost all of the profile pictures look alike (same angle, same face size, similar smile, etc.).

The researchers say they contacted LinkedIn customer support, but so far the platform hasn't found a silver bullet. One way to overcome this challenge is to require certain companies to submit a full list of their employees, and then ban any account that claims to work there but isn't on the list.

Besides being unable to determine who is behind this flood of fake professionals, researchers are also struggling to figure out what exactly it is for. Most of the accounts appear to be unattended: they post nothing and don't respond to messages.

Cybersecurity firm Mandiant believes hackers are using these accounts to try to land roles at cryptocurrency companies, as the first step in a multi-stage attack aimed at draining company funds.

Others believe it is a variation on the classic romance scam, where victims are lured in by pretty pictures and persuaded to invest in fake crypto projects and trading platforms.

Additionally, there is evidence that groups like Lazarus use fake LinkedIn profiles to distribute infostealers and other malware to job seekers, especially in the cryptocurrency industry. And finally, some believe the bots could be used in the future to amplify fake news.

In response to the KrebsOnSecurity investigation, LinkedIn said it was considering the idea of domain verification to address the growing problem: "It's an ongoing challenge and we're constantly improving our systems to stop fakes before they come online," the company said in a written statement.

"We stop the vast majority of fraudulent activity we detect in our community: about 96% of fake accounts and about 99.1% of spam and scams. We're also exploring new ways to protect our members, like expanding email domain verification. Our community is made up of genuine people having meaningful conversations, and we're always working to increase the legitimacy and quality of our community."

Via: KrebsOnSecurity