AI Trustworthiness Requires Skepticism and Active Verification
At IT Chosun's AI Biz Academy, Genesislab leader Yoo Ji-hyeong presented three criteria for trustworthy AI and emphasized that users must demand transparency and verify developers' data, algorithms, and processes.
Source: IT Chosun (IT조선) — Yoo Ji-hyeong, Genesislab leader: “AI trustworthiness demands skepticism and rigorous verification” Original in Korean
At the AI Biz Academy Season 1 seminar organized by IT Chosun, Genesislab leader Yoo Ji-hyeong presented approaches to addressing AI trustworthiness concerns. The presentation topic was “Risky Generative AI: How Should We Use It?”
Yoo outlined the trust lifecycle that AI systems pass through. When a technology causes serious harm such as loss of life, trustworthiness plummets into what he termed the “valley of despair.” Only through sustained expert effort does trustworthiness eventually recover to the “plateau of sustainability.”
He offered three criteria for identifying trustworthy AI: first, whether the system is built on deep learning; second, whether the company controls its own data pipeline; and third, whether the system is verifiable. Deep learning models discover patterns on their own rather than fixating on hand-picked features, Yoo explained. However, because these models absorb whatever biases are present in their training data, a diverse, high-quality data pipeline is essential.
Critically, Yoo emphasized: “AI trustworthiness problems cannot be detected before they occur.” He urged users to demand transparency from developers across data, algorithms, and development processes so that these can be verified directly. AI companies, for their part, should continuously commission trustworthiness evaluations and apply AI development guidance frameworks in real-world deployments.