CAIO Interview — Genesislab's View on the EU AI Act's 'High-Risk' Classification
Digital Chosun interviewed Genesislab CAIO Yoo Dae-hoon on the EU AI Act's high-risk classification for HR AI. His take: 'Good, actually' — HR AI should be tightly governed, and regulation should be paired with market-economy incentives that reward vendors investing in trustworthiness first.
Source: Digital Chosun (디지틀조선일보), “EU AI Act Classifies AI Recruiting Solutions as ‘High-Risk’: What Do Vendors Think?” (original in Korean)
In the piece, Digital Chosun asked vendors how they view the EU AI Act’s ‘high-risk’ classification; Yoo Dae-hoon, Chief AI Officer (CAIO) at Genesislab, represented the company.
Under the EU AI Act, passed in February 2024, systems used to screen job applications, evaluate candidates, and monitor employee performance or behavior are categorized as high-risk AI. The high-risk obligations apply from August 2, 2026, and non-compliance can draw penalties of up to EUR 35 million or 7% of global annual turnover, whichever is higher.
Yoo’s reaction was straightforward: “Good, actually.” As he framed it, “High-risk AI is not AI that’s banned — it’s AI that carries enough risk that it has to be built and governed carefully. HR AI should be fair and safe. Tight governance fits the category.” He added a policy suggestion: “I’d like to see a market economy that rewards the companies that invest in trustworthiness and ethics first.”
The article also laid out Genesislab’s credentials as context: it was the first company in Korea to earn TTA’s AI Trustworthiness assessment, it co-authored Korea’s AI Self-Assessment Checklist with KISDI, and it has a documented track record of internal investment in trust and safety for HR AI. Amazon’s gender-biased hiring model, built around 2014, came up as a cautionary precedent for why getting ahead of the regulation matters.
The full regulatory detail and Yoo’s complete remarks are available in the original interview.