AI Analytics for Surveys – Internal FAQ on security, GDPR & privacy

As we introduce AI-powered analytics in our survey module, we want to ensure all employees understand the security, GDPR compliance, and privacy measures we have in place.

What AI technology is powering the new analytics feature?

Huma’s AI analytics is powered by OpenAI’s technology. It helps analyze survey responses,
identify trends, and provide actionable insights.

How does it work?

When you activate AI analysis on a completed survey, the survey and its responses
are sent to OpenAI along with a prompt we have designed to produce a useful and easily
digestible analysis of the results.
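For colleagues curious about the mechanics, the flow above can be sketched roughly as follows. Note that the function name, data shapes, and prompt text here are illustrative assumptions for explanation only, not Huma’s actual implementation:

```python
# Illustrative sketch only -- names and structure are assumptions,
# not the real Huma integration code.

def build_analysis_request(survey_title, responses, anonymous=True):
    """Assemble the payload sent for AI analysis of a completed survey.

    When the survey is anonymous, respondent identifiers are stripped
    before anything leaves our systems, mirroring the rule that the
    survey's anonymity settings also apply to AI analysis.
    """
    cleaned = [
        {"answer": r["answer"]} if anonymous
        else {"respondent": r.get("respondent"), "answer": r["answer"]}
        for r in responses
    ]
    prompt = (
        f"Analyze the responses to the survey '{survey_title}': "
        "identify trends and provide actionable, easily digestible insights."
    )
    return {"prompt": prompt, "responses": cleaned}


request = build_analysis_request(
    "Office wellbeing Q1",
    [{"respondent": "a.person@humahr.com", "answer": "More quiet rooms"}],
    anonymous=True,
)
# For an anonymous survey, the payload contains answers only,
# with no respondent identifiers.
```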

How is employee and customer data protected?

Security is a top priority. All data processed by AI is encrypted both in transit and at rest. No
personally identifiable information (PII) beyond the responses themselves is included in the AI
processing, and the survey’s anonymity settings also apply to the information sent for AI
analysis.

Is the AI feature GDPR-compliant?

Yes. OpenAI has been through our internal risk assessment process, and both technical and
contractual safeguards are in place to protect data subjects. 

Does OpenAI store or use our data for training?

No. The AI models we use do not retain or use data for training purposes. We have
configured our integration to prevent OpenAI from storing or accessing Huma survey data
beyond the scope of each analysis.

Can employees opt out of AI-powered analytics?

Employees cannot opt out individually, as the AI is applied at the survey module level.
However, survey creators can control whether AI analytics is used for specific surveys.

Who has access to AI-generated insights?

Only authorized users, such as HR teams and survey administrators, have access to
AI-generated insights. These insights contain aggregated trends, not raw responses.

What measures are in place to prevent bias in AI analysis?

Our AI provider continuously evaluates their models for fairness and mitigates bias by training
them on diverse datasets. Additionally, HR teams are encouraged to interpret AI-generated
insights critically and treat them as assistance to, not a replacement of, their own
assessments.

What should I do if I suspect a security or privacy issue? 

If you identify a potential security or privacy issue related to the AI analytics feature, report it
immediately to the security & compliance team, either via our internal deviation software or by
email to security@humahr.com.