From Fedora Project Wiki

= Christiano Anderson =


Data engineer and free software developer from Berlin, Germany.
 
I started using Linux in 1997, and Slackware was the first distro I used. I also used Debian, SuSE, and Red Hat before switching to Fedora. Besides that, I have professionally worked with different Unix flavours (BSD, Solaris, and SCO).
 
Today, I work with data (engineering, architecture, development), and I rely on Python and Scala as my daily drivers. I like building complex data pipelines, working with real-time data streaming, and analysing large amounts of data in near real time. I had the chance to work with Hadoop clusters when they were still popular, but today I mostly work with Spark and its internals, especially data lakes and data serialisation (distributing data at large scale using Apache Iceberg, Databricks Delta tables, etc.).
 
I'm starting to maintain Python, Scala, AI/ML, and data-related packages for Fedora.


== Contact Info ==
'''Email''': canderson9@fedoraproject.org


'''Blog''': https://christiano.dev


'''IRC''': dump


'''Matrix''': @canderson9:fedora.im
 
'''GPG''': D1B95F5B952C53C3BC5745A8E710DE1555F69CEC

Latest revision as of 06:55, 26 August 2023
