Last Updated: December 11, 2025
Overview
Kizuna uses automation and AI to help Customers view and organize background-check information they have obtained from Consumer Reporting Agencies (CRAs) and other systems they control. This page explains, at a high level:
where automation and AI show up in the Platform,
how we keep humans in control, and
how we support Customers’ obligations under emerging AI and automated-decision rules.
This page supplements, and does not replace, the Kizuna Privacy Policy and your agreement with Kizuna. It is provided for transparency and compliance support only. Kizuna is not a Consumer Reporting Agency (CRA) and does not make hiring decisions. Kizuna does not add new public-record data to background checks, does not alter the substance of reports provided by CRAs, and does not furnish consumer reports or any compiled background “dossier” about a consumer to any third party. We provide decision-support software that Customers use alongside their own policies, legal advice, and human review.
How Kizuna Uses Automation & AI
Within the Platform, automation and AI are used to support how Customers view, filter, and work with information they already obtain from Consumer Reporting Agencies and other sources. For example, features may help standardize how information is displayed, surface items that appear to need human review, or draft template text for Customer approval.
We deliberately describe these capabilities at a high level. Internally, we may use a mix of rules engines, statistical models, and modern ML/LLM techniques, but the core idea is the same: Kizuna helps your reviewers access and organize information in a consistent format, faster.
Humans in Control & Roles
Kizuna is designed so that people remain responsible for final decisions:
The Platform surfaces information, context, and documentation prompts, but does not change what information appears in the underlying background reports obtained from CRAs.
Customers configure their own policies and decide how much weight, if any, to give Kizuna outputs.
Final decisions about employment, engagement, or similar outcomes are made by Customers, not by Kizuna.
From a legal-role perspective:
For Candidate data (Report Artifacts and Candidate Context), Kizuna generally acts as a service provider / processor and our Customers act as the business / controller.
For AI-enabled features, Kizuna is the developer of the tools; Customers are the deployers / users who choose how to incorporate those tools into their hiring or engagement workflows.
Customers are responsible for determining whether their particular use of Kizuna is covered by laws governing automated tools in hiring (such as New York City’s AEDT rules, Colorado’s AI law, or California’s automated decisionmaking regulations) and for meeting any employer-side obligations (bias audits, notices, risk assessments, and so on).
Governance & Risk Management
Because some Customers use Kizuna in connection with employment decisions, we treat AI-enabled features as higher-sensitivity and apply additional governance, including:
Use-case scoping. We identify features that may influence consequential decisions and apply additional design review and documentation.
Data minimization. We work with background-check information that Customers and their chosen CRAs already lawfully possess; we do not build or sell any separate consumer background database. We minimize sensitive identifiers in prompts, logs, and monitoring wherever feasible.
Documentation. Internally, we document the purpose, inputs, and known limitations of material AI-enabled features so they are used as intended and can be explained at a high level to Customers and auditors.
Testing and monitoring. Where lawful and technically appropriate, we test for unexpected behavior and monitor Customer feedback for signs of potential unfairness or misconfiguration.
Our Platform Terms and Acceptable Use restrictions prohibit using Kizuna to unlawfully discriminate or to substitute Kizuna outputs for independent human judgment and required individualized assessments.
Laws, Audits & Customer Compliance
Several state and local frameworks now specifically address AI in hiring and other “high-impact” decisions (for example, New York City’s Local Law 144, Colorado’s AI Act, and California’s regulations on automated decisionmaking technology).
In that landscape:
Regulators may classify certain employer workflows that include Kizuna as part of a “high-risk AI system,” “automated employment decision tool,” or “automated decisionmaking technology,” depending on how those employers design and rely on their overall process.
Kizuna designs and positions its tools as decision-support, not as stand-alone decision engines.
Subject to confidentiality and security requirements, we can support Customers and their independent auditors by providing high-level feature descriptions, configuration/usage metadata, and sample outputs to help them complete required bias audits or impact assessments.
This page also serves as Kizuna’s public statement about our AI-enabled tools and governance program where laws (such as the Colorado AI Act) expect developers of covered systems to publish a summary of the types of systems they make available and how they manage risks of algorithmic discrimination.
What We Don’t Do
To set expectations clearly:
We do not act as a Consumer Reporting Agency (CRA), and we do not compile or furnish consumer reports or background “dossiers” about workers to third parties.
We do not independently approve or deny employment, engagement, housing, credit, or other eligibility outcomes.
We do not sell personal information or share it for cross-context behavioral advertising.
We do not use personal information (including Candidate data) to train generalized AI models for other customers or unrelated products, unless that use is explicitly described in an Order or DPA exhibit and the Customer has opted in.
Relationship to Your Agreement & Contact
This page is informational. Unless an Order or addendum expressly says otherwise:
It does not amend your agreement with Kizuna.
It does not create additional warranties, service levels, or performance guarantees.
If there is any conflict between this page and your Master Services Agreement, Order, Platform Terms, or Data Processing Addendum, your contractual documents will control.
If you are working on an AI-related compliance project (for example, a bias audit, impact assessment, or ADMT inventory) and need additional information about Kizuna’s AI-enabled features, contact:
privacy@kizuna.solutions
(please include “AI Compliance” in the subject line)
