At Forter, one of our primary missions is to make online commerce safe and seamless for everyone. Artificial intelligence (AI) and machine learning (ML) are core to how we detect and prevent fraud, enable trusted identities, and protect payments worldwide. With these powerful technologies comes a responsibility: to ensure that AI is used in a way that is ethical, responsible, transparent, and aligned with our values.
Using AI in an ethical and responsible manner is critically important to Forter’s identity. One of our core company values is to “Do What’s Right,” and we apply this principle to every AI decision. That means adhering to standards of transparency, fairness, accountability, privacy, security, and reliability is not just a matter of compliance; it is part of who we are. We scrutinize our use of AI to identify and address potential risks, while continuously innovating to deliver solutions that meet our customers’ needs, uphold human rights, encourage inclusion, and power a more trusted digital economy.
We believe that responsible AI is responsible business. By holding ourselves accountable, we help ensure that AI delivers on its promise: creating an inclusive, secure, and seamless digital economy that benefits businesses, consumers, and society at large.
Forter’s Responsible AI Principles
Forter has developed a Responsible AI Framework that guides how we design, build, and operate AI. This framework reflects our commitment to six key principles:
- Transparency: Transparency is core to Forter’s Responsible AI Framework. We inform our customers and their end users that AI and ML will be used in connection with our services to make decisions that may affect consumers’ ability to transact. We strive to communicate clearly and consistently when and why AI is employed in our technologies; its intent and potential impact on individuals; the data used and the security and privacy controls applied to our models; and the reasoning behind our decisions, all in a manner that is accessible and understandable. More information about our services can be found in our Services Privacy Policy. We also encourage open dialogue with our customers and their end users, and provide channels for them to raise questions or concerns.
- Fairness: We have designed our AI systems to expand access to eCommerce, reduce bias, and avoid discriminatory outcomes. Our identity-based approach aims to reduce false declines and enable more legitimate customers to transact. We also strive to identify and remediate harmful biases within our algorithms, training data, and applications, through ongoing training and re-calibration of our AI platform.
- Accountability: Accountability for AI solutions is essential to responsible development and operations throughout the AI lifecycle. As a company that develops, deploys, and uses AI solutions, Forter seeks to take responsibility for its work, primarily by implementing appropriate governance and controls to ensure that our AI solutions operate as intended. Oversight is embedded in our processes, with governance by a dedicated Responsible AI Oversight Committee.
- Privacy: Forter has built privacy practices into our product development lifecycle. These practices are designed to ensure that we build privacy-enhancing features, functionality, and processes into our product and service offerings. We ensure that our processing of personal data in connection with our AI systems is permitted, purpose-aligned, proportional, and fair, and that it complies with global regulatory requirements.
- Security: AI systems should be resilient and protected from malicious actors. Forter seeks to build AI technologies by leveraging leading security practices, drawing on our secure development lifecycle to maximize resilience and trustworthiness. We also seek to employ robust security measures and industry best practices to protect personal data and ensure that personal data is not leaked or improperly disclosed. To meet the unique characteristics of AI, we have implemented certain security controls for AI that are intended to improve attack resiliency, data protection, privacy, threat modeling, monitoring, and third-party compliance.
- Reliability: Forter prioritizes innovation, and we seek to design and test our AI systems and their components for reliability. As part of our responsible AI commitment, we endeavor to review AI-based solutions and embed controls in their development lifecycle to maintain consistency of purpose and intent when operating in varying conditions and use cases. Where we identify that an AI solution has potential impacts on individuals, we impose additional controls as appropriate. The result is that we continuously test and monitor our models to ensure accurate, consistent, and replicable results, and we adapt them where needed.
Our Responsible AI Principles in Action
Principles matter most when they are put into action. At Forter, we have built structures, processes, and safeguards that make responsible AI a daily practice, shaping how we develop our models, govern our systems, and engage with customers and regulators. These measures ensure that our AI not only powers growth and trust in digital commerce, but does so in a way that is fair, transparent, and accountable.
Some of the structures we’ve built to translate our Principles into action include:
- Guidance and Oversight: We have created a cross-functional Responsible AI Oversight Committee that is tasked with advising teams, setting policy goals, and reviewing AI applications, processes, and initiatives. This ensures that our AI practices meet the highest ethical and regulatory standards.
- Embedded Controls and Safeguards: We integrate security, privacy, and ethical processes into product development, assessing models for risks, mitigating bias, and training employees. We also restrict the use of certain sensitive or protected characteristics as inputs into our AI systems, and we monitor for unintended bias or discriminatory effects in our models.
- Continuous Review: We run ongoing testing, impact assessments, and customer feedback loops to refine our models and mitigate risks.
- External Engagement: We monitor global AI regulations, collaborate with industry experts, and continuously update our practices to reflect evolving standards.
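To make the embedded-controls idea above concrete, here is a minimal, purely illustrative sketch (not Forter’s actual pipeline or feature set; the attribute names, groups, and thresholds are hypothetical) of two safeguards the list describes: excluding protected characteristics from model inputs, and monitoring decisions for unequal outcomes across groups.

```python
# Illustrative only: hypothetical attribute names and decision data,
# not Forter's real features, models, or monitoring thresholds.

# Characteristics that must never reach the model as inputs (assumed set).
PROTECTED = {"age", "gender", "ethnicity"}

def strip_protected(record: dict) -> dict:
    """Remove protected characteristics before a record becomes model input."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Approval rate per group, from (group, approved) decision pairs."""
    totals: dict[str, int] = {}
    approved: dict[str, int] = {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest group approval rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Usage: filter one record, then audit a small batch of decisions.
record = {"age": 42, "gender": "f", "email_age_days": 900, "order_total": 59.0}
features = strip_protected(record)  # protected keys dropped before modeling

decisions = [("group_a", True), ("group_a", True),
             ("group_b", True), ("group_b", False)]
rates = approval_rates(decisions)   # per-group approval rates
ratio = disparate_impact(rates)     # flag for review if far below 1.0
```

In a real monitoring loop, a ratio well below parity would trigger the kind of review and re-calibration the principles above describe, rather than any automatic action.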