AI Interviewers Can Be Fairer Than People
In an interview with Hankook Ilbo, Genesislab CEO Lee Young-bok discussed how AI interviewers address fairness issues in hiring, manage bias through quality control, maintain transparency with clients, and approach data privacy and regulation.
Source: Hankook Ilbo (한국일보), “AI interviewers without nepotism or regional bias can be fairer” (original in Korean)
In an interview for the Hankook Ilbo series “Ethics in the Age of AI,” Genesislab CEO Lee Young-bok discussed fairness, ethics, and the regulatory direction of AI-based hiring. The conversation centered on how AI can address inherent problems in human interviews.
Lee identified key flaws in traditional human interviews. Educational prestige creates a halo effect, candidates’ regional origins and personal connections influence outcomes, and interviewers’ moods on any given day shape results. He positioned AI interviewing as a way to escape these variables, enabling evaluation under blind conditions.
On the learning methodology, Lee explained how the system transfers expertise from HR professionals into data. The model learns from assessments made by interviewers with 20 to 30 years of experience and incorporates the Behavioral Event Interview (BEI) framework as its core evaluation criteria. Label design involved approximately 100 academic experts in industrial and recruitment psychology, while facial analysis assesses 10 non-verbal dimensions including confidence, composure, and self-assurance.
Addressing bias concerns, Lee outlined the quality control process. The team ensures women leaders are evenly represented in the training data, filters out cases where different evaluators rate the same video differently, and removes assessments that deviate significantly from peer evaluations. After development, the company conducts significance testing on 100 randomly selected video samples, comparing client companies’ evaluator ratings against the AI’s scores. When its judgment is uncertain, the AI outputs “uncertain” and defers the final decision to human discretion.
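The post-development check Lee describes, comparing client evaluators’ ratings with AI scores on 100 randomly sampled videos, could take the form of a paired significance test. The sketch below is purely illustrative: the function names, the choice of a paired t-test, and the pass/fail threshold are assumptions, not details of Genesislab’s actual method.

```python
import random
import statistics
from math import sqrt

def paired_t_statistic(human, ai):
    """Paired t-statistic for the mean difference between matched score lists."""
    diffs = [h - a for h, a in zip(human, ai)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation of differences
    return mean_d / (sd_d / sqrt(len(diffs)))

def qc_sample_check(human_scores, ai_scores, n=100, threshold=1.98, seed=0):
    """Randomly sample n paired ratings and flag whether they diverge.

    threshold ~1.98 approximates the two-sided 5% critical value of the
    t-distribution at 99 degrees of freedom (n=100 samples).
    Returns (t_statistic, passed); passed=True means no significant divergence.
    """
    rng = random.Random(seed)
    idx = rng.sample(range(len(human_scores)), n)
    h = [human_scores[i] for i in idx]
    a = [ai_scores[i] for i in idx]
    t = paired_t_statistic(h, a)
    return t, abs(t) <= threshold
```

A failing check (a large |t|) would signal that the AI’s scoring has drifted from human evaluators on that sample and needs re-examination before deployment.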
On transparency, Lee stated that the company has established internal ethics standards, discloses its processes to clients (excluding trade secrets), and maintains structures that allow clients to participate in AI development and evaluation, creating stakeholder alignment.
The sensitive nature of facial data was also addressed. When questioned whether requiring consent for facial analysis amounts to coerced consent, Lee noted the company follows current legal procedures and acknowledged that AI cannot function without data. He added that the company is researching pixelation technology to ensure only AI can process facial information.
Regarding regulatory movements, Lee expressed the view that implementing verification procedures is preferable to allowing unverified companies to enter the market, and stressed that government support should go to companies that pass such verification.