

The AI market is projected to become a $190 billion industry by 2025 (according to Markets and Markets), and global spending on cognitive and AI systems was expected to reach $35.8 billion in 2019, an increase of 44.0% over the amount spent in 2018 (according to IDC). This research suggests AI is advancing fast, already being adopted by large enterprises and poised to change how we live and work.
But when it comes to implementation in organizations, it is still early days for AI, and there are reasons for that. An AI system requires meticulous training before it can perform its intended function. When that function involves something as complex as making human-like judgments about images or videos – "seeing," in other words – the system must be exposed to enormous volumes of accurately labeled and annotated training data. With AI becoming a growing enterprise priority, data science teams are under tremendous pressure to deliver projects, yet they frequently struggle to produce training data at the required scale and quality.
Why do organizations struggle to structure data for their AI strategy?
By Don Roedner