<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Apache on Zio Ivan</title>
    <link>https://ivandemarino.me/tags/apache/</link>
    <description>Recent content in Apache on Zio Ivan</description>
    <generator>Hugo</generator>
    <language>en</language>
    <managingEditor>detronizator&#43;blog@gmail.com (Ivan De Marino, aka &#34;Zio Ivan&#34;, aka &#34;detro&#34;)</managingEditor>
    <webMaster>detronizator&#43;blog@gmail.com (Ivan De Marino, aka &#34;Zio Ivan&#34;, aka &#34;detro&#34;)</webMaster>
    <copyright>2004-2026 Ivan De Marino. Licensed under CC BY 4.0</copyright>
    <lastBuildDate>Sun, 19 Mar 2023 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://ivandemarino.me/tags/apache/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Kafkesc Updates: Docker, __consumer_offsets, byte parsing and Rust</title>
      <link>https://ivandemarino.me/posts/kafkesc-updates/</link>
      <pubDate>Sun, 19 Mar 2023 00:00:00 +0000</pubDate><author>detronizator&#43;blog@gmail.com (Ivan De Marino, aka &#34;Zio Ivan&#34;, aka &#34;detro&#34;)</author>
      <guid>https://ivandemarino.me/posts/kafkesc-updates/</guid>
      <description>&lt;p&gt;While I haven&amp;rsquo;t taken the time to blog since the &lt;a href=&#34;https://ivandemarino.me/2022/12/announcing-ksunami&#34;&gt;Ksunami announcement&lt;/a&gt;,
I have been ploughing away at various projects inside the &lt;a href=&#34;https://github.com/kafkesc&#34;&gt;Kafkesc&lt;/a&gt; organization,
and also continuing the side-objective of growing my &lt;a href=&#34;https://www.rust-lang.org/&#34;&gt;Rust&lt;/a&gt; skills.&lt;/p&gt;
&lt;p&gt;So, here is a recap of a few things I have released since. And also,
how is it leading to a substantial growth in my &lt;a href=&#34;https://www.rust-lang.org/&#34;&gt;Rust&lt;/a&gt; knowledge.&lt;/p&gt;
&lt;h2 id=&#34;ksunami-gets-an-official-docker-image&#34;&gt;Ksunami gets an official Docker image&lt;/h2&gt;
&lt;p&gt;In an attempt to make adoption easier, I set up &lt;a href=&#34;https://github.com/kafkesc/ksunami-docker&#34;&gt;ksunami-docker&lt;/a&gt; so that running
&lt;code&gt;ksunami&lt;/code&gt; is even easier, whether in Docker, Kubernetes or wherever you
need it. For example:&lt;/p&gt;
    </item>
    <item>
      <title>Announcing Ksunami v0.1.x</title>
      <link>https://ivandemarino.me/posts/announcing-ksunami/</link>
      <pubDate>Wed, 14 Dec 2022 00:00:00 +0000</pubDate><author>detronizator&#43;blog@gmail.com (Ivan De Marino, aka &#34;Zio Ivan&#34;, aka &#34;detro&#34;)</author>
      <guid>https://ivandemarino.me/posts/announcing-ksunami/</guid>
      <description>&lt;p&gt;October this year, while I was in the process of
&lt;a href=&#34;https://www.linkedin.com/feed/update/urn:li:activity:6995482562605236224/&#34;&gt;changing job&lt;/a&gt;,
I started working on an open source project to monitor Kafka &lt;em&gt;consumer lag&lt;/em&gt;.
At &lt;a href=&#34;https://newrelic.com&#34;&gt;New Relic&lt;/a&gt;, a previous gig, we used &lt;strong&gt;a lot of Kafka&lt;/strong&gt;,
and we cared equally about monitoring its usage: there are some
&lt;a href=&#34;https://newrelic.com/blog/best-practices/new-relic-kafkapocalypse&#34;&gt;great articles&lt;/a&gt;
on New Relic own blogs, &lt;a href=&#34;https://newrelic.com/blog/search?s=kafka&#34;&gt;published over the years&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In the process, I realised that I needed a way to spin up a Kafka cluster for
development, &lt;em&gt;and&lt;/em&gt; I needed a producer of Kafka records, that was able to behave
in accordance to specific scenarios.&lt;/p&gt;</description>
    </item>
    <item>
      <title>TFZK - A Terraform Provider for Apache ZooKeeper</title>
      <link>https://ivandemarino.me/posts/tfzk/</link>
      <pubDate>Fri, 02 Dec 2022 00:00:00 +0000</pubDate><author>detronizator&#43;blog@gmail.com (Ivan De Marino, aka &#34;Zio Ivan&#34;, aka &#34;detro&#34;)</author>
      <guid>https://ivandemarino.me/posts/tfzk/</guid>
      <description>&lt;h2 id=&#34;gimme-the-tldr&#34;&gt;Gimme the TL;DR&lt;/h2&gt;
&lt;p&gt;A new Terraform provider is available, designed to interact with ZooKeeper ZNodes:
&lt;a href=&#34;https://registry.terraform.io/providers/tfzk/zookeeper/latest&#34;&gt;TFZK&lt;/a&gt;.
The latest stable version is &lt;code&gt;v1.0.3&lt;/code&gt;, and you should give it a go.&lt;/p&gt;
&lt;p&gt;Ah! And &lt;a href=&#34;https://registry.terraform.io/providers/tfzk/zookeeper/latest/docs&#34;&gt;here is the doc&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;ok-i-got-more-time---go-ahead&#34;&gt;OK, I got more time - go ahead!&lt;/h2&gt;
&lt;p&gt;Earlier this year I decided to scratch a long-standing itch: build a
&lt;a href=&#34;https://developer.hashicorp.com/terraform/language/providers&#34;&gt;Terraform Provider&lt;/a&gt;
for &lt;a href=&#34;https://zookeeper.apache.org/&#34;&gt;Apache ZooKeeper&lt;/a&gt;. While there was already
&lt;a href=&#34;https://registry.terraform.io/providers/ContentSquare/zookeeper/latest&#34;&gt;one&lt;/a&gt;,
it came with limitations that created issues in production environments:&lt;/p&gt;</description>
    </item>
    <item>
      <title>Apache Hadoop on Mac OS X</title>
      <link>https://ivandemarino.me/posts/apache-hadoop-on-mac-os-x/</link>
      <pubDate>Sun, 20 Apr 2008 23:04:06 +0000</pubDate><author>detronizator&#43;blog@gmail.com (Ivan De Marino, aka &#34;Zio Ivan&#34;, aka &#34;detro&#34;)</author>
      <guid>https://ivandemarino.me/posts/apache-hadoop-on-mac-os-x/</guid>
      <description>&lt;p&gt;&lt;img alt=&#34;Hadoop&#34; loading=&#34;lazy&#34; src=&#34;http://hadoop.apache.org/images/hadoop-logo.jpg&#34;&gt;
For some reason I started to play with &lt;a href=&#34;http://hadoop.apache.org/core/&#34;&gt;Apache Hadoop (Core)&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Hadoop is a software platform that lets one easily write and run applications that process vast amounts of data.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Here&amp;rsquo;s what makes Hadoop especially useful:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Scalable:&lt;/strong&gt; Hadoop can reliably store and process petabytes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Economical&lt;/strong&gt;: It distributes the data and processing across clusters of commonly available computers. These clusters can number into the thousands of nodes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Efficient:&lt;/strong&gt; By distributing the data, Hadoop can process it in parallel on the nodes where the data is located. This makes it extremely rapid.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reliable&lt;/strong&gt;: Hadoop automatically maintains multiple copies of data and automatically redeploys computing tasks based on failures.
Hadoop implements &lt;a href=&#34;http://wiki.apache.org/hadoop/HadoopMapReduce&#34;&gt;MapReduce&lt;/a&gt;, using the &lt;a href=&#34;http://hadoop.apache.org/core/docs/current/hdfs_design.html&#34;&gt;Hadoop Distributed File System (HDFS)&lt;/a&gt;. MapReduce divides applications into many small blocks of work. HDFS creates multiple replicas of data blocks for reliability, placing them on compute nodes around the cluster. MapReduce can then process the data where it is located.
Hadoop has been demonstrated on clusters with 2000 nodes. The current design target is 10,000 node clusters.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;I followed the &lt;a href=&#34;http://hadoop.apache.org/core/docs/current/quickstart.html&#34;&gt;Quickstart&lt;/a&gt; guide and I can confirm that it works on Mac OS X too, but I only managed to make it run in &amp;ldquo;&lt;a href=&#34;http://hadoop.apache.org/core/docs/current/quickstart.html#Standalone+Operation&#34;&gt;standalone&lt;/a&gt;&amp;rdquo; mode: useful for first-stage development and debugging.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
