CAIO Interview — Genesislab's View on the EU AI Act's 'High-Risk' Classification

Digital Chosun interviewed Genesislab CAIO Yoo Dae-hoon on the EU AI Act's high-risk classification for HR AI. His take: 'Good, actually' — HR AI should be tightly governed, and regulation should be paired with market-economy incentives that reward vendors investing in trustworthiness first.

Source: Digital Chosun (디지틀조선일보) — EU AI Act Classifies AI Recruiting Solutions as ‘High-Risk’: What Do Vendors Think? Original in Korean

Digital Chosun interviewed vendors on how they view the EU AI Act’s ‘high-risk’ classification. Yoo Dae-hoon, Chief AI Officer (CAIO) at Genesislab, represented the company in the piece.

Under the EU AI Act, passed in February 2024, systems used to screen job applications, evaluate candidates, and monitor employee performance or behavior are categorized as high-risk AI. The regulation takes effect on August 2, 2026, and non-compliance can draw penalties of up to EUR 35 million or 7% of global annual turnover, whichever is higher.

Yoo’s reaction was straightforward: “Good, actually.” As he framed it, “High-risk AI is not AI that’s banned — it’s AI that carries enough risk that it has to be built and governed carefully. HR AI should be fair and safe. Tight governance fits the category.” He added a policy suggestion: “I’d like to see a market economy that rewards the companies that invest in trustworthiness and ethics first.”

The article also laid out Genesislab’s credentials as context: the first company in Korea to complete TTA’s AI Trustworthiness assessment, co-author of Korea’s AI Self-Assessment Checklist with KISDI, and a documented track record of internal investment in trust and safety for HR AI. Amazon’s 2014 gender-biased hiring model came up as a cautionary precedent that explains why getting ahead of the regulation matters.

The full regulatory detail and Yoo’s complete remarks are available in the original interview.
