1. Introduction, Purpose, and Scope
AvePoint, Inc. (“AvePoint,” the “Company,” “we,” “us,” or “our”) has implemented this AI Policy, which includes Responsible AI and Generative AI (this “Policy”), to outline guidelines and best practices for the use of AI within our organization and to ensure the protection of data privacy and confidentiality. All employees and contractors who utilize AI and AI Systems must adhere to this Policy to ensure the legal, responsible, and ethical use of these technologies.
At AvePoint, we are committed to the responsible development, deployment, and use of Artificial Intelligence (AI) technologies. Our mission is to ensure that AI benefits society, promotes fairness, and aligns with AvePoint’s core values of integrity, trust, accountability, and security. In line with this commitment, we align with the NIST AI Risk Management Framework (RMF) to guide the responsible and ethical use of AI in our products, services, and operations.
This policy establishes a set of principles and practices for managing AI risks in alignment with NIST’s guidelines, ensuring that AI systems are transparent, reliable, and accountable while mitigating potential harms.
This Policy is for AvePoint, Inc. internal reference and use only and may not be shared with external partners or other third parties, except that this Policy may be shared with external auditors and advisers upon prior written approval from AvePoint’s Legal Department. AI definitions and examples are provided in Appendix A.
This policy and procedure document applies to all AI systems developed, deployed, or operated by AvePoint, including but not limited to:
- Machine learning models
- AI-powered automation tools
- Data-driven decision support systems
- AI-enhanced analytics and reporting systems
- Generative AI systems such as ChatAVPT, Copilot, etc.
The procedures outlined below apply to all teams and employees involved in AI-related work, including general use, research, design, development, data management, testing, deployment, maintenance, and monitoring. For the Guiding Principles of Responsible AI, please see Appendix B.
2. Generative AI Policy
Data Privacy and Confidentiality
1. Generative AI on company devices must be used only for business purposes, not for personal use.
2. All communication with Generative AI should be considered confidential and treated as such.
3. Employees and contractors must not share any sensitive or confidential information with Generative AI, including but not limited to personal information, financial information, trade secrets, company policies, or any other nonpublic information, without prior approval from the Legal Department.
4. Any data generated by Generative AI must be handled with the same level of confidentiality as any other company data.
5. The privacy policy and terms of use of any Generative AI service must be reviewed by Legal before the service is used.
6. Specific limitations of use may apply to different Generative AI services depending on the data being entered.
7. Use of Generative AI must be reviewed if personal data or nonpublic information is sent to the platform. This process is referred to as a Privacy Impact Assessment.
8. A Vendor Risk Assessment must be completed before obtaining a professional or free account.
NOTE: Please refer to the Data Classification Policy, and/or reach out to AvePoint’s PSR Team (PSR@avepoint.com) should you have any questions as to whether information is considered sensitive or confidential, as well as for additional guidance. This should be done before any information is entered into Generative AI, out of an abundance of caution.
Access to Generative AI
1. Employees must select their AI platform from the list of Approved Vendors, Solutions and AI Solutions, available through the Privacy and Security resource site.
2. Access to Generative AI is limited to authorized employees and contractors only.
3. Login credentials must be kept confidential and not shared with anyone else.
4. Users must log out of Generative AI when not in use and must not allow anyone else to use their account.
Responsible Use of Generative AI
1. Generative AI must be used in a responsible and ethical manner.
2. Users must not engage in any behavior that may violate company policies, laws, or ethical standards while using Generative AI.
3. Users must not engage in any behavior that may harm the reputation of the company or its clients while using Generative AI.
4. Engineers must not load AvePoint code into Generative AI for optimization or any other purpose, except into solutions on the list of Approved Vendors, Solutions and AI Solutions, available through the Privacy and Security resource site.
5. Engineers should consider whether using code examples produced by Generative AI may limit legal protections of the source code.
6. Before using Generative AI output, a trademark or copyright search should be performed by Legal. (Note: the US Copyright Office is expected to release new guidance on generative AI.)
Incident Reporting and Enforcement
1. Any suspected breach of data privacy or confidentiality must be reported immediately through the “See Something Say Something” portal.
2. Any violation of this policy may result in disciplinary action, up to and including termination of employment.
3. Responsible AI Policy and Procedure
Identify: Understanding AI Risks and Governance
Policy: AvePoint will proactively identify, assess, and document the risks associated with each AI system from design through deployment and maintenance.
Procedures:
- AI Risk Assessment:
- Before developing any AI system, teams must conduct a thorough risk assessment that includes ethical, legal, and technical considerations. This process is known as an AI Impact Assessment.
- The assessment will identify potential risks (e.g., bias, data privacy issues, security concerns) and outline risk mitigation strategies.
- AI system developers will maintain documentation of these risks and mitigation strategies, including the potential social, ethical, and legal implications of each AI application.
- Risk classification system: The EU AI Act establishes a tiered compliance framework consisting of different categories of risk and different requirements for each category. All AI systems will need to be inventoried and assessed to determine their risk category and the ensuing responsibilities.
  - Prohibited systems: Systems posing what legislators consider an unacceptable risk to people’s safety, security, and fundamental rights will be banned from use in the EU.
  - High-risk AI systems: These systems will carry the majority of compliance obligations (alongside general-purpose AI (GPAI) systems), including the establishment of risk and quality management systems, data governance, human oversight, cybersecurity measures, post-market monitoring, and maintenance of the required technical documentation. (Further obligations may be specified in subsequent AI regulations for healthcare, financial services, automotive, aviation, and other sectors.)
  - Minimal-risk AI systems: Beyond the initial risk assessment and some transparency requirements for certain AI systems, the AI Act imposes no additional obligations on these systems but invites companies to commit to codes of conduct on a voluntary basis.
- Stakeholder Identification:
- Identify stakeholders (e.g., end users, affected communities, regulatory bodies) and engage them during the risk assessment phase.
- Gather input from stakeholders about concerns or challenges that may arise with the use of AI systems.
- AI Governance Structure:
- Establish an AI Governance Committee responsible for overseeing AI risks, compliance, and ethical guidelines.
- Ensure that there is clear accountability for AI system outcomes, both in terms of performance and ethical considerations.
- The committee will work with subject-matter experts to ensure that all AI systems are aligned with AvePoint’s values and the NIST RMF.
Protect: Safeguarding AI Systems and Reducing Risks
Policy: AvePoint will implement measures to mitigate the risks identified in the “Identify” phase, ensuring AI systems are secure, fair, and aligned with ethical standards.
Procedures:
- Bias Detection and Mitigation:
- AI teams will implement methods to detect and reduce biases in data and algorithms.
- Regular audits will be conducted to identify and mitigate biases throughout the AI system lifecycle (an illustrative fairness-metric check appears after this list).
- Use diverse and representative datasets during training and model validation to minimize the risk of biased outcomes.
- Security and Privacy Safeguards:
- Implement security controls to protect AI models, data, and systems from unauthorized access, tampering, and attacks.
- Ensure compliance with applicable privacy and AI regulations, such as the GDPR, CCPA, and EU AI Act, including data anonymization and encryption practices.
- Conduct privacy risk assessments for each AI system that handles sensitive data.
- Human Oversight:
- Ensure that appropriate human oversight is built into the design and deployment of AI systems, particularly in high-stakes applications (e.g., healthcare, finance).
- Human-in-the-loop (HITL) systems will be employed in critical decision-making processes to ensure AI-driven decisions align with human judgment and values.
- Safeguarding Transparency:
- Provide clear documentation on how AI models work, including the data sources, algorithms, and assumptions used in model design.
- Ensure AI systems are interpretable, with clear explanations of how decisions are made available to end-users.
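The bias detection and audit procedures above can be supported with simple, quantitative fairness checks. The sketch below is illustrative only and is not a mandated AvePoint tool: it computes the demographic parity difference and disparate impact ratio for a binary classifier’s outputs. The column names, group labels, and 0.8 review threshold are assumptions made for this example.

```python
# Illustrative fairness-metric check (not an AvePoint-mandated implementation).
# Assumes a pandas DataFrame with a binary prediction column and a group column;
# column names, group labels, and the 0.8 threshold are assumptions for this example.
import pandas as pd


def demographic_parity_report(df: pd.DataFrame,
                              prediction_col: str = "approved",
                              group_col: str = "group") -> dict:
    """Compare positive-prediction rates across groups."""
    rates = df.groupby(group_col)[prediction_col].mean()
    return {
        "positive_rate_by_group": rates.to_dict(),
        "demographic_parity_difference": float(rates.max() - rates.min()),
        # Disparate impact: ratio of the lowest to the highest positive rate;
        # values below ~0.8 are a common heuristic flag for further review.
        "disparate_impact_ratio": float(rates.min() / rates.max()) if rates.max() > 0 else None,
    }


if __name__ == "__main__":
    # Synthetic example data: group A approved at 60%, group B at 45%.
    data = pd.DataFrame({
        "group": ["A"] * 100 + ["B"] * 100,
        "approved": [1] * 60 + [0] * 40 + [1] * 45 + [0] * 55,
    })
    print(demographic_parity_report(data))
```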
Analyze: Continuously Assessing AI System Performance and Risks
Policy: AvePoint will continually monitor and assess the performance, fairness, and safety of AI systems to ensure they function as expected and mitigate risks.
Procedures:
- Performance and Fairness Monitoring:
- Continuously monitor AI systems post-deployment to assess whether they are functioning as expected, including monitoring for potential degradation in performance over time (an illustrative drift check appears after this list).
- Regularly evaluate AI models for fairness, ensuring they do not unintentionally discriminate against certain groups or individuals.
- Implement fairness metrics and conduct regular audits to check for bias, fairness, and equity in the AI system's outputs.
- Impact Assessment:
- Perform periodic Impact Assessments to evaluate the broader societal, ethical, and environmental impacts of AI systems.
- Reassess risks and identify new concerns that may arise as AI technologies evolve or when the model is used in new contexts.
- Continuous Feedback Mechanism:
- Develop a feedback loop that allows users and stakeholders to report issues, errors, or concerns about AI system behavior.
- Actively seek feedback from diverse groups to identify potential blind spots or unanticipated risks.
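One way to operationalize the post-deployment performance monitoring described above is a periodic drift check on model inputs or scores. The sketch below is offered as an illustration rather than a prescribed implementation: it computes the Population Stability Index (PSI) between a baseline sample and recent production data. The bin count and the common 0.1/0.25 alert thresholds are assumptions for this example.

```python
# Illustrative drift check (not an AvePoint-mandated implementation).
# PSI compares the distribution of a feature or model score between a baseline
# (e.g., training data) and recent production data; the bin count and the
# 0.1 / 0.25 rule-of-thumb thresholds are assumptions for this example.
import numpy as np


def population_stability_index(baseline: np.ndarray,
                               recent: np.ndarray,
                               bins: int = 10) -> float:
    """Compute PSI between two samples of a continuous variable."""
    # Bin edges are taken from quantiles of the baseline distribution.
    edges = np.quantile(baseline, np.linspace(0.0, 1.0, bins + 1))
    # Clip both samples to the baseline range so every value falls in a bin.
    base_counts, _ = np.histogram(np.clip(baseline, edges[0], edges[-1]), bins=edges)
    new_counts, _ = np.histogram(np.clip(recent, edges[0], edges[-1]), bins=edges)
    # Convert counts to proportions; a small epsilon avoids log(0) and division by zero.
    eps = 1e-6
    base_pct = np.clip(base_counts / base_counts.sum(), eps, None)
    new_pct = np.clip(new_counts / new_counts.sum(), eps, None)
    return float(np.sum((new_pct - base_pct) * np.log(new_pct / base_pct)))


if __name__ == "__main__":
    # A shifted recent sample should yield a noticeably higher PSI.
    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 10_000)
    recent = rng.normal(0.3, 1.0, 10_000)  # simulated drift
    psi = population_stability_index(baseline, recent)
    print(f"PSI = {psi:.3f}  (rule of thumb: >0.1 monitor, >0.25 investigate)")
```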
Respond: Addressing AI Issues and Improving Systems
Policy: AvePoint will respond effectively to AI-related incidents, ensuring that issues are addressed quickly, and systems are improved based on feedback and monitoring.
Procedures:
- Incident Response Plan:
- Develop and maintain an AI Incident Response Plan that outlines how to respond to AI failures, biases, security breaches, or other issues.
- The plan will include steps for identifying, mitigating, and communicating issues to relevant stakeholders, including customers, regulators, and the public.
- Affected stakeholders will be notified promptly in case of serious incidents, and remediation will be initiated without delay.
- Corrective Actions and System Improvements:
- After identifying an issue with an AI system, AvePoint will take immediate corrective actions, which may include updating the model, adjusting the data, or redesigning the system to address the root cause.
- Systems will undergo continuous improvement through iterative testing, updates, and refinements, ensuring that they become safer, more reliable, and more ethical over time.
- Documentation and Reporting:
- Document all incidents, corrective actions, and improvements made to the AI system.
- Maintain transparency by reporting to relevant stakeholders (e.g., internal teams, customers, regulators) regarding the incident and the steps taken to address it.
Compliance and Reporting
Policy: AvePoint will ensure that all AI systems comply with relevant legal, regulatory, and ethical requirements.
Procedures:
- Compliance Checks:
- Regularly assess AI systems for compliance with applicable laws, including data protection regulations (e.g., GDPR, CCPA, EU AI Act), and ethical standards.
- Maintain records of compliance activities and audits to demonstrate adherence to AI regulations and standards.
- External Audits:
- Engage external auditors to conduct independent reviews of AI systems, focusing on fairness, transparency, and risk mitigation.
- Review and address audit findings to ensure continuous alignment with responsible AI practices.
Appendix A - Generative AI
Generative AI refers to a type of artificial intelligence that is designed to generate or create new content, such as images, music, text, or even entire virtual environments, which did not previously exist. Generative AI models are trained on large datasets of existing content and then use this knowledge to create new content that is similar in style or format to the original data.
Approved AI Solutions. AvePoint maintains a list of approved AI solutions for internal usage on the Privacy and Security resource site. These solutions have undergone privacy, security, and AI risk assessments and are approved for internal use. Any AI solutions not on this list must be reviewed and approved by both the Legal and Security departments before they can be deployed internally. For more information on having an AI solution evaluated, please contact security@avepoint.com.
Appendix B
Guiding Principles of Responsible AI
AvePoint’s Responsible AI policy is grounded in the following principles, inspired by NIST’s RMF for AI:
- Accountability: We take responsibility for the outcomes and impacts of AI systems. Clear roles and accountability structures will be established for the design, deployment, and monitoring of AI technologies.
- Transparency: We commit to providing clear and accessible explanations of how AI systems function, how decisions are made, and the data sources used. We strive to ensure that AI technologies are understandable and interpretable by all stakeholders.
- Fairness: Our AI systems will be designed and tested to minimize biases and ensure equitable treatment of individuals, regardless of their demographic, background, or circumstances.
- Privacy and Security: AvePoint will prioritize data privacy and security in all AI systems, ensuring compliance with applicable privacy laws (such as GDPR, CCPA) and safeguarding user data from unauthorized access or misuse.
- Reliability and Safety: AI systems will be developed to operate reliably under a variety of conditions and to function safely throughout their lifecycle. We will work to minimize the risk of harm to individuals and society from malfunction, failure, or misuse.
- Human-Centered Design: AI will be designed to complement and augment human decision-making, not replace it. Human oversight will be incorporated into AI processes where necessary to ensure ethical outcomes.
Policy Compliance
Compliance Measurement
The Security team will verify compliance with this policy through various methods, including but not limited to business tool reports, internal and external audits, and feedback to the policy owner.
Exceptions
Any exception to the policy must be approved by the Security team in advance.
Non-Compliance
Failure to comply with this policy may result in disciplinary action, up to and including termination of employment. The company reserves the right to modify this policy at any time without notice.
Related Standards, Policies and Processes
- AvePoint Data Classification Policy
- AvePoint Data Handling Policy
- AvePoint Privacy and Information Security Policy
Validity and Document Management
This document is valid as of March 9, 2026
The owner of this document is the Director of Privacy, Security & Risk, who must check and, if necessary, update the document at least once every year.