Why Frontier AI Needs Nineteenth-Century Sociology [StoryPack-Genesislab ⑦]
In the seventh StoryPack installment, AI research lead Jihyeong Yoo uses Max Weber's distinction between goal-oriented and value-based rationality to explain what AI still cannot do alone — and who has to do it instead.
The installment spotlights Jihyeong Yoo, head of Genesislab’s AI research team. He brings an uncommon combination of AI engineering and sociology, and he applies Max Weber’s nineteenth-century theory of rationality to modern AI work.
His distinction is straightforward. AI is good at goal-oriented rationality: reaching a specified target efficiently. Value-based rationality, the work of deciding what is right and wrong in the first place, still requires human judgment. When ChatGPT replies in an ethical-sounding way, Yoo says, it is not because the system understands, but because it has “learned to imitate appropriately.”
The headline takeaway is sharper. “No matter how advanced AI becomes, humans have to continuously monitor and adjust it. Society’s values keep shifting, so this process has no end.” Speed of technical progress alone is not enough; someone has to decide when and how to update the value baseline, and that is a social question.
The view ties back to Genesislab’s product direction. The company has been building ViewinterHR as a “personable AI” and holds Korea’s first TTA AI trustworthiness certification, both of which foreground transparency and emotional understanding. Frontier AI may look far from nineteenth-century sociology, but the article’s question — who makes the judgments technology cannot — sits closer than it seems.
Source: Digital Daily (디지털데일리) — An Unlikely Pairing: Frontier AI Meets 19th-Century Sociology [StoryPack-Genesislab ⑦]