Privacy considerations for the MCP Server
Carefully evaluate the privacy considerations and recommendations before connecting Matomo MCP to any external LLM and ensure appropriate safeguards are in place.
Privacy and regulatory
Controller
You, as the Matomo customer, remain the data controller for analytics data, including:
- the analytics data processed within your Matomo instance and the Matomo MCP;
- the decision to share such data via MCP with third-party AI tools.
The further processing performed by the AI provider must be assessed separately to determine whether that provider acts as a processor or as an independent controller. As the controller for your Matomo instance, the MCP processing and any downstream LLM processing, you are responsible for ensuring compliance with all relevant data protection laws.
Processor
We continue to process your analytics data in your instance and via MCP as processors under your instructions. Processing by Matomo is carried out as a processor within the meaning of Article 28 GDPR and is governed by the applicable Data Processing Agreement.
Independent controller / processor
When using MCP with AI tools, the AI provider (e.g. OpenAI, Anthropic, Microsoft) acts as a recipient of the data and processes the data independently of Matomo. As a controller, you should assess:
- whether the AI provider acts as a processor or independent controller – this will depend on the applicable service terms;
- whether a Data Processing Agreement (DPA) is required and, if so, whether it is included in the terms that govern your subscription with the AI provider (e.g. OpenAI or Anthropic) or whether it can be entered into separately;
- whether data may be used for model training.
As a controller you should also consider the following:
- whether you have a valid legal basis to process your analytics data using MCP and LLM;
- what data is exposed to the LLM through MCP;
- who within your organisation is authorised to grant AI tools access to your analytics data via the MCP server;
- which AI service receives the data;
- where that AI service processes data;
- what obligations apply to that provider under applicable data protection laws (e.g., GDPR) or AI laws (e.g., EU AI Act).
The assessment below can help you understand the potential risks of sharing analytics data with external AI tools, as well as your privacy obligations.
Privacy assessment
Before enabling the MCP for use by administrators or internal users, your organisation should perform a privacy assessment to identify potential risks and determine the applicable privacy obligations associated with sharing analytics data with external AI tools.
1. Legal basis for processing
If your processing activity is covered by the GDPR or a similar privacy law, you must ensure that you have a legal basis for processing analytics data before enabling MCP or exposing additional data to AI tools. In the EU, your processing of analytics data will rely either on consent (in most cases) or on legitimate interest (where certain forms of analytics are exempt from consent).
The use of MCP in combination with AI tools may constitute a new or extended processing activity. You must assess whether this use is compatible with the original purpose of data collection in accordance with Article 6(4) GDPR. In particular, you should consider:
- whether data is disclosed to new recipients (AI providers);
- whether the nature of processing (AI-based analysis) changes the risk profile;
- whether additional inferences are generated.
Where compatibility cannot be established, a new legal basis (including consent where required) must be obtained.
2. Personal data exposure (Data minimisation)
Evaluate whether the analytics data retrieved through the MCP contains personal data. This will depend on the privacy settings configuration in your instance. If the MCP exposes visitor-level data or identifiable information, additional safeguards may be required. Examples include:
- visitor identifiers;
- user IDs;
- URLs containing personal information;
- event metadata associated with identifiable individuals;
- location or device data combined with other identifiers.
Consider whether the data shared through the MCP is:
- aggregated and anonymised;
- pseudonymised (for example, hashed identifiers);
- potentially identifiable when combined with other data.
Even pseudonymised analytics data may become identifiable when combined with AI outputs or external datasets. This may increase privacy risks beyond the original analytics use.
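As an illustration of the pseudonymisation point above, a common approach is keyed hashing of identifiers before any data leaves your environment. This is a minimal sketch under assumed field names and key handling; note that salted or keyed hashing is pseudonymisation, not anonymisation, and does not by itself remove GDPR obligations.

```python
import hashlib
import hmac

# Sketch: pseudonymise a visitor identifier with a keyed hash (HMAC-SHA-256)
# before sharing data outside your environment. The secret key must stay
# within your organisation; whoever holds it could re-link the pseudonyms.
SECRET_KEY = b"rotate-and-store-this-securely"  # placeholder, not a real key

def pseudonymise(visitor_id: str) -> str:
    # Same input always maps to the same pseudonym, so analyses remain
    # consistent across reports, but the raw identifier is not exposed.
    return hmac.new(SECRET_KEY, visitor_id.encode(), hashlib.sha256).hexdigest()
```

Because the mapping is deterministic, the output can still act as an identifier when combined with other data sets, which is exactly the residual risk described above.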
3. Transparency and accountability requirements
When you enable MCP to expose the analytics data in your Matomo instance to LLM providers (e.g. OpenAI or Anthropic), any such AI tool providers will need to be listed as data recipients in your privacy policy, regardless of whether they act as processors or independent controllers. You should also:
- update any Consent Management Platform (CMP) configurations to reflect these recipients;
- update your records of processing activities (ROPA); and
- revise any relevant internal governance documentation.
These documents should clearly explain:
- that AI tools may process analytics data retrieved from Matomo;
- which categories of data may be processed;
- which external providers receive the data; and
- the purpose of using AI tools with analytics data.
Transparent communication helps ensure that users and stakeholders understand how their data may be processed.
4. Data protection impact assessment (DPIA)
Your existing Data Protection Impact Assessment (DPIA) should be updated to reflect the additional data processing by AI tools. If you do not have a DPIA, reassess whether you should put one in place. Note that if you collect identifiable visitor data in Matomo Analytics (e.g. full IP addresses, User IDs, URLs containing personal data, or the visits log and visitor profiles), this data will be accessible to the AI tool, and a DPIA may be required to reflect this.
A DPIA helps evaluate potential risks to individuals and identify appropriate mitigation measures. You should assess this requirement in accordance with Article 35 GDPR and applicable supervisory authority guidance. Where required, you can request documentation or explanations of Matomo’s security architecture to support your assessment.
5. AI provider agreements
You should review the contractual terms of the AI provider used with the MCP. Important considerations include:
- whether the provider offers a Data Processing Agreement (DPA);
- whether submitted data may be used for model training; and
- whether the provider offers enterprise privacy protections.
You should also verify:
- whether sub-processors are used;
- how long data is retained;
- whether data is reused for secondary purposes (e.g. model training).
Enterprise subscriptions often provide stronger data protection guarantees than free-tier AI services.
6. Data transfers outside the EEA
Many AI providers process data in the United States or other jurisdictions outside the European Economic Area (EEA). You must verify the following:
- where the AI provider processes data;
- whether appropriate data transfer safeguards are in place;
- whether additional contractual protections are required;
- whether a Data Transfer Impact Assessment (DTIA) is required; and
- whether the provider participates in recognised international data transfer frameworks.
7. EU AI Act considerations
While the MCP itself is not an AI system on its own, the external AI tools used with it are. Depending on how these tools are used:
- your organisation may be considered a deployer of AI systems under the EU AI Act;
- specific obligations may apply, including ensuring appropriate human oversight and transparency.
Depending on how your Matomo Analytics instance and external AI tools are used, you should assess whether your use case could fall outside the low-risk category under the EU AI Act, taking into account factors such as:
- the level of automation in decision-making;
- whether visitor-level data is processed; and
- whether AI systems generate recommendations or automated actions.
AI systems may produce inaccurate, incomplete, or misleading responses. The behaviour and outputs of these models are controlled by the AI provider. Matomo does not control how third-party AI models interpret or present data retrieved through the MCP.
Privacy Recommendations
1. Disable MCP by default
The MCP should remain disabled unless it is explicitly required. Enabling MCP allows external AI tools to retrieve analytics data through the MCP interface, so organisations should only activate it after evaluating their security and compliance requirements.
2. Use granular permissions
Access to the MCP server requires authentication using a Matomo API token. API permissions determine which data can be accessed.
To minimise risk:
- restrict MCP access using API permissions and authentication tokens;
- create separate API tokens for different use cases or tools;
- limit tokens to the minimum permissions required; and
- implement token lifecycle controls.
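To illustrate the least-privilege approach above, the sketch below builds a request against the Matomo Reporting API using a dedicated token that should be scoped to read-only access and a single use case. The base URL, site ID, and token value are placeholders; `VisitsSummary.get` is used here because it returns only aggregated metrics.

```python
from urllib.parse import urlencode

# Placeholder instance URL; replace with your own Matomo installation.
MATOMO_URL = "https://analytics.example.com/index.php"

def build_report_request(token_auth: str, id_site: int) -> str:
    # Request an aggregated summary report rather than visitor-level data.
    # The token passed in should be created specifically for this tool,
    # with the minimum permissions required (see token lifecycle controls).
    params = {
        "module": "API",
        "method": "VisitsSummary.get",  # aggregated metrics only
        "idSite": id_site,
        "period": "week",
        "date": "today",
        "format": "JSON",
        "token_auth": token_auth,
    }
    return f"{MATOMO_URL}?{urlencode(params)}"

url = build_report_request("placeholder_token", 1)
```

Using one token per tool, as recommended above, also means a single token can be revoked without disrupting other integrations.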
This ensures that AI tools can only retrieve authorised data.
3. Minimise data set
Only expose the minimum analytics data necessary for your specific use case when enabling MCP. Prefer aggregated or anonymised data over visitor-level data, and avoid sharing identifiers, full URLs, or detailed event data unless strictly required.
Avoid combining data sets in ways that could re-identify visitors: do not combine analytics data accessed via MCP with other data sets (e.g. CRM, user accounts, or third-party data) in a manner that could enable the identification of individual visitors. Even pseudonymised data may become identifiable when merged or analysed by AI tools.
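As a practical complement to the minimisation guidance above, one option is to filter report rows before they are handed to any AI tool. This is a hedged sketch: the field names below are illustrative assumptions, so check which fields your reports actually contain and which ones are identifying in your configuration.

```python
# Fields assumed to be potentially identifying; adjust for your instance.
SENSITIVE_FIELDS = {"visitorId", "userId", "visitIp", "deviceId", "url"}

def minimise_row(row: dict) -> dict:
    # Drop potentially identifying fields, keeping only aggregate metrics.
    return {k: v for k, v in row.items() if k not in SENSITIVE_FIELDS}

# Example: only the aggregate metric survives the filter.
filtered = minimise_row({"visitorId": "abc123", "nb_visits": 5})
```

Filtering at this boundary keeps the decision about what leaves your environment in your own code, rather than relying on the AI tool to discard data it has already received.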
4. Confidentiality and business risks
Sharing analytics data with AI tools may expose business-sensitive metrics, campaign performance data or internal operational insights. Ensure that data shared is proportionate and confidentiality risks are assessed before proceeding. You should never share such information with public/free versions of AI tools.