Description
The "big data infrastructure" world is dominated by Java, but the data-analysis world is dominated by Python. So if you need to analyse and process huge amounts of data, chances are you're in for a less-than-ideal time. The impedance mismatch will probably make your life hard somehow.
So there are a lot of projects and companies trying to bridge those two worlds seamlessly, and many of the popular solutions see SQL as the glue. But this week we're going to look at another solution - ignore Java, treat Kafka as a protocol, and build up all the infrastructure tools you need with a pure Python library. It's a lot of work, but in theory it would make Python the one language for data storage, analysis and processing, at scale. Tempting, but is it feasible?
Joining me to discuss the pros, cons, and massive scope of that approach is Tomáš Neubauer. He started off doing real-time data analysis for the McLaren F1 team, and is now deep in the Python mines, effectively rewriting Kafka Streams in Python. But how? How much work is actually involved in porting those ideas to Python-land, and how do you even get started? And perhaps most fundamental of all - even if you succeed, will that be enough to make the job easy, or will you still have to scale the mountain of teaching people how to use the new tools you've built? Let's find out.
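If you're curious what "Kafka Streams, but in Python" looks like in practice, here's a rough sketch of a pipeline in the style of Quix Streams' getting-started guide (linked below). The broker address, topic names and filter condition are made up for illustration, and the exact API may differ between versions, so treat this as a taster rather than a reference and check the docs:

from quixstreams import Application

# Connect to a Kafka broker (placeholder address - point it at your own cluster)
app = Application(broker_address="localhost:9092", consumer_group="example-group")

# Declare input and output topics with JSON (de)serialisation
readings = app.topic("sensor-readings", value_deserializer="json")
alerts = app.topic("temperature-alerts", value_serializer="json")

# Build a streaming dataframe: keep only hot readings and republish them
sdf = app.dataframe(readings)
sdf = sdf.filter(lambda row: row["temperature"] > 90)
sdf = sdf.to_topic(alerts)

# Run the consume/process/produce loop
app.run(sdf)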
–
Quix Streams on Github: https://github.com/quixio/quix-streams
Quix Streams getting started guide: https://quix.io/get-started-with-quix-streams
Quix: https://quix.io/
Tomáš on LinkedIn: https://www.linkedin.com/in/tom%C3%A1%C5%A1-neubauer-a10bb144
Tomáš on Twitter: https://twitter.com/TomasNeubauer0
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
Kris on Twitter: https://twitter.com/krisajenkins
--
#podcast #softwaredevelopment #datascience #apachekafka #streamprocessing