LinkedIn, the professional networking platform owned by Microsoft, has announced a significant change to its privacy policy. Starting November 3, 2025, the company will begin using user data, including profile details, public posts, and feed activity, to train its artificial intelligence models. The change applies to users in the UK, EU, Switzerland, Canada, and Hong Kong, and the same data will also support personalized advertising across Microsoft's broader network.
While the new policy opts users in by default, individuals can opt out of this data collection. The move raises questions about data privacy and the control users have over their professional information in the age of generative AI.
Key Takeaways
- LinkedIn will use user data for AI model training and personalized ads starting November 3, 2025.
- This policy applies to users in the UK, EU, Switzerland, Canada, and Hong Kong.
- The default setting is opt-in, but users can manually opt out through their privacy settings.
- Data includes profile information, public posts, and feed activity.
- US-based users have already been subject to similar data collection for some time.
Understanding the New Data Policy
The updated policy means that the professional information shared on LinkedIn, such as resumes, skill endorsements, and professional discussions, could become part of the dataset used to develop and refine Microsoft's AI technologies. This data will also contribute to creating more targeted advertising experiences across Microsoft's various platforms.
For many users, the idea of their career history and professional insights being fed into generative AI models might feel unsettling. LinkedIn currently counts over one billion members worldwide, making it a vast repository of professional data. The decision to make participation the default suggests an expectation that many users will never actively change their settings.
Quick Fact
LinkedIn has more than one billion members globally, representing a significant source of professional data.
Impact on Data Privacy and Security
The integration of professional data into AI models raises concerns about potential security implications. LinkedIn is already a target for cybercriminals and fraudsters who exploit publicly available information for social engineering and phishing attacks. By providing richer, more contextual data to AI models, there is a risk that automated spear-phishing attacks could become more sophisticated and convincing.
Cybersecurity experts warn that AI models trained on comprehensive profile data could generate highly personalized and credible malicious content, making it harder for individuals to distinguish genuine communications from fraudulent ones. This could have significant consequences for both individual users and the organizations they represent.
"Allowing richer data to be fed directly into AI models could increase the risk that automated spear-phishing attacks become more credible because the models have more real profile data and context to work from."
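To see why richer profile data makes automated lures more convincing, consider this illustrative sketch. It is purely hypothetical: the profile fields and template are invented for demonstration, and no real scraping, AI model, or messaging is involved. The point is simply that once attacker tooling has structured fields like a member's recent posts and skills, filling a generic template with them is trivial.

```python
# Illustrative sketch only: demonstrates how structured profile fields
# can be slotted into a generic message template, which is why richer
# public data makes automated spear-phishing harder to spot.
# All fields and the template are hypothetical.

def personalize_lure(profile: dict) -> str:
    """Fill a generic outreach template with profile-specific context."""
    return (
        f"Hi {profile['name']}, I saw your post about {profile['recent_topic']} "
        f"and noticed we both work in {profile['industry']}. "
        f"A role at my firm matches your {profile['top_skill']} background."
    )

# A mock profile with the kinds of fields a public profile exposes.
mock_profile = {
    "name": "Alex",
    "recent_topic": "cloud migration",
    "industry": "fintech",
    "top_skill": "Kubernetes",
}

print(personalize_lure(mock_profile))
```

Each detail pulled from a real profile removes one of the generic tells (wrong industry, vague greeting, no shared context) that people normally rely on to spot fraudulent outreach.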
How to Opt Out of AI Training and Ad Targeting
Users who wish to prevent their data from being used for AI training and personalized advertising must take specific steps within their LinkedIn settings. The process involves navigating through several privacy options.
Steps to Opt Out of Generative AI Improvement
- Go to LinkedIn.
- Select Settings & Privacy.
- Click on Data Privacy.
- Find the option labeled "Data for Generative AI Improvement".
- Toggle this setting to "Off".
This action specifically stops your data from feeding into LinkedIn's AI training models.
Background Information
The increasing use of AI across tech platforms has led to a growing demand for data to train these complex models. Companies often use data from their vast user bases to improve AI accuracy and functionality, leading to ongoing debates about user consent and data ownership.
Minimizing Microsoft-Wide Ad Targeting
To further control how your data is used for advertising across Microsoft's network, additional settings need to be adjusted. These options are located within the same Data Privacy section.
- Scroll down in the Data Privacy section.
- Locate and toggle "Ads off LinkedIn" to "Off".
- Toggle "Data from others for ads" to "Off".
- Toggle "Measure ad success" to "Off".
- Toggle "Share data with affiliates and partners" to "Off".
Setting these options to "Off" helps to significantly reduce the chances of your profile data being used for broader Microsoft ad targeting.
Implications for Corporate Environments
While individual users can manage their own privacy settings, the situation is more complex for companies. Employers do not have direct control over their staff's personal LinkedIn profiles. This means organizations need to proactively inform their employees about the new policy and the steps required to opt out.
Companies should consider updating their social media policies. This includes reminding staff about the default opt-in for AI training and encouraging them to review their privacy settings. It also serves as an opportune moment to reiterate the general dangers of oversharing personal and professional information on any social network, especially given the rising threat of AI-powered cyberattacks.
Regional Differences
Users in the United States have already been subject to AI training data collection from their LinkedIn profiles for some time, indicating a staggered rollout of these policies.
A Broader Industry Trend
LinkedIn's new data policy is part of a larger trend observed across the technology industry. More and more tech companies are adopting an "anything goes" approach to collecting and utilizing user data for training their AI systems. This practice is not unique to LinkedIn or Microsoft; it reflects a broader societal challenge regarding data governance and user consent.
As AI technology continues to advance, the demand for vast datasets will only grow. This places the onus on users to remain vigilant about their digital footprints and actively manage their privacy settings across all online platforms. Understanding these policies and exercising available controls is essential for protecting personal and professional information in an increasingly AI-driven world.





