<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Codex Arcana]]></title><description><![CDATA[Software Development Arcanist Ideas]]></description><link>http://blog.eldermael.io/</link><image><url>http://blog.eldermael.io/favicon.png</url><title>Codex Arcana</title><link>http://blog.eldermael.io/</link></image><generator>Ghost 5.61</generator><lastBuildDate>Fri, 17 Apr 2026 04:04:33 GMT</lastBuildDate><atom:link href="http://blog.eldermael.io/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Frequently Asked Questions During Mentorship Sessions]]></title><description><![CDATA[<p>During the past weeks I have spent about 2 hours per week on mentorship sessions with various developers from the <a href="https://github.com/devzcommunity/community?ref=blog.eldermael.io" rel="noopener ugc nofollow">Devz Community</a>. 
I am really thankful to everyone who chose me among all the mentors, and I took notes on most of the questions.</p><p>I immediately found frequently asked questions</p>]]></description><link>http://blog.eldermael.io/frequently-asked-questions-during-mentorship-sessions/</link><guid isPermaLink="false">632a6e90ab45ff006ba2bddc</guid><category><![CDATA[mentorship]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[eldermael]]></dc:creator><pubDate>Fri, 17 Sep 2021 01:54:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1644132246573-bc75ce0a2946?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDMxfHxtZW50b3JzaGlwfGVufDB8fHx8MTY2NjExNTc0NA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1644132246573-bc75ce0a2946?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDMxfHxtZW50b3JzaGlwfGVufDB8fHx8MTY2NjExNTc0NA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Frequently Asked Questions During Mentorship Sessions"><p>During the past weeks I have spent about 2 hours per week on mentorship sessions with various developers from the <a href="https://github.com/devzcommunity/community?ref=blog.eldermael.io" rel="noopener ugc nofollow">Devz Community</a>. I am really thankful to everyone who chose me among all the mentors, and I took notes on most of the questions.</p><p>I immediately found frequently asked questions for all the topics that I listed, so I decided to write a blog post with them. In the future I plan to update this post with more questions as they come.</p><h1 id="technical-leadership">Technical Leadership</h1><h1 id="i-feel-i-am-programming-less-and-less-as-a-tech-lead-is-this-normal">I Feel I Am Programming Less And Less [As A Tech Lead], Is This Normal?</h1><p>Yes. Technical leaders are moving to a path where you are no longer just producing software, i.e.
an individual contributor. This is probably the most common question about the transition from senior developer to tech lead.</p><p>What matters here is time management. I suggest a balanced schedule that allows new tech leads to keep programming while covering the other skills required. Here is an ideal example:</p><ul><li>30% of your time could be spent on the mentorship of your team members,</li><li>30% could be spent on meetings/product development,</li><li>30% could be spent on coding,</li><li>10% can be spent on growing your own skills.</li></ul><p>A good reference for this kind of transition for me has been the work of <a href="https://twitter.com/patkua?ref=blog.eldermael.io" rel="noopener ugc nofollow">Patrick Kua</a> on the topic. <a href="https://www.youtube.com/watch?v=iLS6NXMXtLI&amp;ref=blog.eldermael.io" rel="noopener ugc nofollow">Here</a> is the most influential talk I could find on this.</p><h1 id="what-skills-do-i-need-to-be-a-tech-lead">What Skills Do I Need to Be A Tech Lead?</h1><p>I think there are fundamentally 4 skills that you need to develop:</p><ol><li>Leadership skills, whether technical or non-technical,</li><li>System Design and Architecture skills,</li><li>Developer skills, which you may already have, and</li><li>Time management skills to balance your time spent on these.</li></ol><h1 id="i-want-to-keep-coding-but-grow-my-career-is-there-a-way-to-do-that">I Want To Keep Coding But Grow My Career. Is There A Way To Do That?</h1><p>This entirely depends on your organization. If the only path for growth is to go into management, it will probably conflict with this particular interest of yours.</p><p>I have experienced this myself, and Tech Lead positions usually conflict with time spent programming.
If your particular organization has technical paths (such as Principal Engineer) you can try to move in that direction, but most companies do not have such paths.</p><p>You can always trailblaze the path by communicating clearly with your managers that you are pursuing such a goal. Some companies can accommodate you, but others have different priorities that will conflict with yours.</p><h1 id="architecture">Architecture</h1><h1 id="as-an-architect-should-i-still-develop-software">As An Architect, Should I Still Develop Software?</h1><p>Yes. Undoubtedly you don&#x2019;t want to become the proverbial <a href="https://www.youtube.com/watch?v=v_nhv6aY1Kg&amp;ref=blog.eldermael.io" rel="noopener ugc nofollow">Ivory Tower Architect</a>. If you detach from programming completely, your solutions will most probably also be detached from the code teams produce, which becomes counterproductive because developers will be forced to retrofit your solution instead of fitting it in more organically.</p><h1 id="what-coding-tasks-an-architect-should-focus-on">What Coding Tasks Should An Architect Focus On?</h1><p>In my experience, the best architects I have worked with usually spend time on proofs of concept and foundational technology. I have worked with architects who pair program with developers to unravel solutions that fit the actual working code, instead of coming up with solutions that may or may not fit the current architecture.</p><p>I think this approach has the best results overall because it proves that your architecture guidelines and implementations fit into the software other teams are developing.</p><h1 id="how-can-i-know-if-my-architecture-is-successful">How Can I Know If My Architecture Is Successful?</h1><p><a href="https://www.thoughtworks.com/insights/articles/fitness-function-driven-development?ref=blog.eldermael.io" rel="noopener ugc nofollow">Fitness Functions</a> can be used to measure architecture goals.
They can also be used to drive the evolution of certain parts of the overall system once you know what to aim for.</p><p>An example of this is a fitness function that returns either the time, or the number of commits, between your production code and the latest commit in your development environments. This function will let you know if you are properly applying Continuous Delivery.</p><p>Another example is the <a href="https://www.thoughtworks.com/radar/techniques?blipid=201911044&amp;ref=blog.eldermael.io" rel="noopener ugc nofollow">Dependency drift fitness function</a>. It allows you to understand the drift between microservices regarding libraries and dependencies, which can be interpreted as an indicator of <a href="https://www.youtube.com/watch?v=5kwMgHuOaes&amp;ref=blog.eldermael.io" rel="noopener ugc nofollow">stagnation/ossification</a> of services.</p><h1 id="microservices">Microservices</h1><h1 id="what%E2%80%99s-the-point-of-microservices">What&#x2019;s The Point Of Microservices?</h1><p>This is a tough question, but I usually focus on these points that work well with the <em><em>evolution analogy</em></em>:</p><ol><li>Microservices are about <em><em>change independence</em></em>. As an example of an <a href="https://evolutionaryarchitecture.com/?ref=blog.eldermael.io" rel="noopener ugc nofollow">evolutionary architecture</a>, microservices allow different parts of your system to evolve quickly. Some services will be very stable once they reach a certain point, but others often need to change to accommodate business needs. Again, some parts of the organism evolve while others remain stable.</li><li>Faster time to market.
Following from the previous point, once the microservice architecture style gives you faster, independent change, you can bring new ideas to production sooner, <a href="https://www.youtube.com/watch?v=YLq3x-WtaRc&amp;ref=blog.eldermael.io" rel="noopener ugc nofollow">with the right tools (Spanish)</a>.</li><li>Making innovation cheaper. As a side effect of the previous point, microservices enable you to quickly prototype new ideas because you can start new business lines from scratch with the right tools.</li></ol><h1 id="do-you-think-we-are-ready-to-adopt-microservices">Do You Think We Are Ready To Adopt Microservices?</h1><p>Microservices are not a free meal. <a href="https://en.wikipedia.org/wiki/Conway%27s_law?ref=blog.eldermael.io" rel="noopener ugc nofollow">Conway&#x2019;s law</a> is an important concern if you plan to adopt this architecture style. There is a social/organizational component to every software architecture style, and having organizations simply jump into microservices without the proper mindset can be problematic.</p><p>If you use Conway&#x2019;s law in your favor, service-based architectures can help you move faster, but without that organizational perspective it usually works against the adoption.</p><h1 id="devops">DevOps</h1><h1 id="what-are-the-most-common-questions-you-are-asked-during-interviews">What Are The Most Common Questions You Are Asked During Interviews?</h1><p>After quite a few interviews I have seen common questions and follow-ups:</p><ol><li>What is DevOps? The interviewer wants to know if you &#x201C;get DevOps&#x201D;. Focus on why DevOps is more of a practice and cultural mindset than a title.</li></ol><p>2. What is SRE?
Site Reliability Engineering is defined as:</p><blockquote><em><em>One could view DevOps as a generalization of several core SRE principles to a wider range of organizations, management structures, and personnel. One could equivalently view SRE as a specific implementation of DevOps with some idiosyncratic extensions.</em></em></blockquote><p>Per <a href="https://sre.google/sre-book/introduction/?ref=blog.eldermael.io" rel="noopener ugc nofollow">Google&#x2019;s own book on the matter</a>.</p><ol start="3"><li>What&#x2019;s the difference between DevOps and SREs? Per the previous question, but paraphrasing here, I think of SRE as an implementation of DevOps practices done by Google.</li><li>Can you describe/document a pipeline to deliver code from scratch? This is a lengthy question that may deserve its own post! It gets asked often.</li><li>What are SLAs and SLOs?</li></ol><h1 id="i-want-to-transition-to-a-%E2%80%9Cdevops%E2%80%9Dsre-position-what-should-i-study">I Want To Transition To A &#x201C;DevOps&#x201D;/SRE Position. What Should I Study?</h1><p>Most &#x201C;DevOps&#x201D; positions are really infrastructure positions. Most SRE positions are SysAdmin positions. Having said this, if you target infrastructure positions you should focus on:</p><ul><li>Learn how to work with a specific Cloud Provider. This implies learning how to manage and architect infrastructure on a specific provider. AWS has around <a href="https://www.statista.com/chart/18819/worldwide-market-share-of-leading-cloud-infrastructure-service-providers/?ref=blog.eldermael.io" rel="noopener ugc nofollow">30% of the market share</a> as of the time of this post, so it maximizes your opportunity to land a job.
This also implies learning <a href="https://infrastructure-as-code.com/book/?ref=blog.eldermael.io" rel="noopener ugc nofollow">Infrastructure as Code</a> practices and tools to help you provision and manage such infrastructure.</li><li>Learn tools and practices for Continuous Integration and Continuous Delivery. Most interviews in the infrastructure space ask you to design a pipeline to deliver services. The goal would be to be able to design pipelines using Jenkins, GitHub Actions, or GitLab pipelines. I recommend watching <a href="https://www.youtube.com/watch?v=po712VIZZ7M&amp;ref=blog.eldermael.io" rel="noopener ugc nofollow">Ken Mugrage</a> talk about modern CI/CD practices, then learning a specific tool.</li><li>Learn about Observability tools and practices. A good rule of thumb is to use tools from the <a href="https://landscape.cncf.io/?ref=blog.eldermael.io" rel="noopener ugc nofollow">Cloud Native Computing Foundation Observability And Analysis landscape.</a> Most interviews, in my experience, will ask you to configure not only instrumentation but also alerts and dashboards for services and infrastructure.</li></ul><p>If you are targeting sysadmin positions, they will probably overlap with the previous points but also require the following:</p><ul><li>Networking fundamentals and Software Defined Networking.</li><li>Operating System maintenance and provisioning (package management, configuration).</li><li>Container/VM Orchestration Tools (Kubernetes is the most popular nowadays).</li></ul>]]></content:encoded></item><item><title><![CDATA[Building A Web Service For A CO2 Sensor With A Raspberry Pi]]></title><description><![CDATA[<p>I recently wanted to introduce my daughters to programming, so I decided to use some kind of sensor to prototype a small application and teach them how to make hardware and software work in tandem as I believe having something physical would be more interesting than me typing on
a</p>]]></description><link>http://blog.eldermael.io/building-a-web-service-for-a-co2-sensor-with-a-raspberry-pi/</link><guid isPermaLink="false">6341b1990494420070c87614</guid><category><![CDATA[iot]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[eldermael]]></dc:creator><pubDate>Thu, 19 Nov 2020 00:00:00 GMT</pubDate><media:content url="http://blog.eldermael.io/content/images/2022/10/snapshot.png" medium="image"/><content:encoded><![CDATA[<img src="http://blog.eldermael.io/content/images/2022/10/snapshot.png" alt="Building A Web Service For A CO2 Sensor With A Raspberry Pi"><p>I recently wanted to introduce my daughters to programming, so I decided to use some kind of sensor to prototype a small application and teach them how to make hardware and software work in tandem, as I believe having something physical would be more interesting than me typing on a REPL.</p><p>Now, I knew a Raspberry Pi had a way to connect sensors using its General-purpose input/output pins, so I decided to build a non-trivial application to get used to the programming model and at the same time gain experience to build something more complex later.</p><h2 id="hardware-required">Hardware Required</h2><p>To start the project I did some research on sensors and, while there were quite a few choices, I got interested in the <a href="https://www.keyestudio.com/keyestudio-ccs811-carbon-dioxide-temperature-air-quality-sensor-for-arduino-p0581.html?ref=blog.eldermael.io" rel="nofollow">KeyEstudio CCS811 Carbon Dioxide/ Air Quality Sensor for Arduino</a>, which is compatible with Raspberry Pi 5V pins.
It also works using the <a href="https://en.wikipedia.org/wiki/I%C2%B2C?ref=blog.eldermael.io" rel="nofollow">I&#xB2;C communication bus</a>, which is also supported.</p><p>Now, in order to prototype faster, I got a <a href="https://www.amazon.com/gp/product/B07DL25MVQ/ref=ppx_yo_dt_b_asin_title_o05_s02?ie=UTF8&amp;psc=1&amp;ref=blog.eldermael.io" rel="nofollow">T-type breakout, a solderless board, and rainbow cable, plus jump wires</a>. With this at hand, I was able to start interfacing the GPIO with the sensor.</p><h2 id="wiring-it-all-together">Wiring It All Together</h2><p>Now, it had been a while (since my college days) since I had worked on a solderless board, but fortunately there is a diagram for connecting the sensor to an Arduino board, so you can deduce how to wire it to the equivalent pins on a Raspberry Pi.</p><figure class="kg-card kg-image-card"><a href="https://github.com/ElderMael/codexarcana-blog/blob/master/content/blog/create-a-co2-sensor-with-raspberry-pi/sensor-connection.png?ref=blog.eldermael.io"><img src="https://github.com/ElderMael/codexarcana-blog/raw/master/content/blog/create-a-co2-sensor-with-raspberry-pi/sensor-connection.png" class="kg-image" alt="Building A Web Service For A CO2 Sensor With A Raspberry Pi" loading="lazy"></a></figure><p>With this diagram, and the official Raspberry Pi documentation, I was able to find the correct pins without much issue.
For reference, here is the Raspberry Pi GPIO pin diagram (this is for the Raspberry Pi 4 Model B).</p><figure class="kg-card kg-image-card"><a href="https://github.com/ElderMael/codexarcana-blog/blob/master/content/blog/create-a-co2-sensor-with-raspberry-pi/gpio-pins.png?ref=blog.eldermael.io"><img src="https://github.com/ElderMael/codexarcana-blog/raw/master/content/blog/create-a-co2-sensor-with-raspberry-pi/gpio-pins.png" class="kg-image" alt="Building A Web Service For A CO2 Sensor With A Raspberry Pi" loading="lazy"></a></figure><p>Finally, with the help of the T-type breakout and the solderless board, here is a picture of the connections. Note that the breakout already labels the pins, so it is not hard to match them.</p><figure class="kg-card kg-image-card"><a href="https://github.com/ElderMael/codexarcana-blog/blob/master/content/blog/create-a-co2-sensor-with-raspberry-pi/solderless-board-connections.jpeg?ref=blog.eldermael.io"><img src="https://github.com/ElderMael/codexarcana-blog/raw/master/content/blog/create-a-co2-sensor-with-raspberry-pi/solderless-board-connections.jpeg" class="kg-image" alt="Building A Web Service For A CO2 Sensor With A Raspberry Pi" loading="lazy"></a></figure><h2 id="validate-connections-to-raspberry-pi-with-i2cdetect">Validate Connections To Raspberry Pi with i2cdetect</h2><p>Now, with everything wired, the fastest way to detect whether the sensor is properly interfacing with the I&#xB2;C bus is to run a utility named <a href="https://linux.die.net/man/8/i2cdetect?ref=blog.eldermael.io" rel="nofollow"><code>i2cdetect</code></a>, which can be installed on Raspberry Pi OS. Here is the command:</p><!--kg-card-begin: markdown--><pre><code class="language-shell">
pi@elderserver:~ $ i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- 5a -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --

</code></pre>
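
For intuition, what `i2cdetect` does is attempt a read at every legal 7-bit address and report the ones that respond. The helper below is an illustrative sketch of that loop (it is not part of this project's code), and `fakeProbe` stands in for a real bus access such as `raspi-i2c`'s `readSync` so the snippet runs without any hardware attached:

```javascript
// Scan the 7-bit I2C address range (0x03-0x77, as i2cdetect does) and
// collect every address where `probe` does not throw. On a real
// Raspberry Pi, `probe` could wrap a bus read; here it is injected so
// the logic is runnable without hardware.
function scanBus(probe) {
  const found = [];
  for (let address = 0x03; address <= 0x77; address++) {
    try {
      probe(address); // throws when no device ACKs this address
      found.push(address);
    } catch (err) {
      // no device at this address; keep scanning
    }
  }
  return found;
}

// Fake probe that simulates the CCS811 answering only at 0x5a.
const fakeProbe = (address) => {
  if (address !== 0x5a) throw new Error('no ACK');
};

console.log(scanBus(fakeProbe).map((a) => a.toString(16))); // [ '5a' ]
```

The same shape is what the real utility reports: a single responding address, `5a`.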
<!--kg-card-end: markdown--><p>The way to interpret the parameters and the output of the command is the following:</p><ul><li><code>i2cdetect</code> takes a positional argument naming the I&#xB2;C bus; it maps to a Linux device by number depending on your board. On the Raspberry Pi 4, it is device number 1, located at <code>/dev/i2c-1</code>. The <code>-y</code> option disables the interactive mode.</li><li>The output of the command represents the bus addresses at which detected components can be found. In this case, the only address responding is <code>5a</code>, which matches the vendor-specified address for the CO2 sensor. You can find more details in the datasheet of the <a href="https://cdn.sparkfun.com/assets/learn_tutorials/1/4/3/CCS811_Datasheet-DS000459.pdf?ref=blog.eldermael.io" rel="nofollow">CCS811 sensor</a>.</li></ul><blockquote>All I&#xB2;C transactions must use the (7 bits) slave address 0x5A or 0x5B depending on status of ADDR pin when writing to and reading from the CCS811.</blockquote><h2 id="read-data-from-the-sensor">Read Data From The Sensor</h2><p>This was the tricky part of the project, as the documentation in the <a href="https://wiki.keyestudio.com/KS0457_keyestudio_CCS811_Carbon_Dioxide_Air_Quality_Sensor?ref=blog.eldermael.io" rel="nofollow">wiki of the seller</a> had mostly C code that turned out to be incorrect once I compared it to the <a href="https://www.sciosense.com/wp-content/uploads/2020/01/CCS811-Application-Note-Programming-and-interfacing-guide.pdf?ref=blog.eldermael.io" rel="nofollow">official programming and interface guide</a>.
I struggled for about a day using the wiki, so I finally started to search for the sensor documentation instead of Arduino IDE examples.</p><h3 id="typescript-i%C2%B2c-and-prometheus">TypeScript, I&#xB2;C And Prometheus</h3><blockquote>The full source code can be found in this <a href="https://github.com/ElderMael/co2-sensor-pi?ref=blog.eldermael.io">Github repository.</a></blockquote><p>In order to work with the I&#xB2;C sensor I decided to use the Node ecosystem, namely a small library called <a href="https://www.npmjs.com/package/raspi-i2c?ref=blog.eldermael.io" rel="nofollow"><code>raspi-i2c</code></a>, which allows sending to and reading from devices attached to the bus in a very procedural way, with sync and async functions.</p><p>To build the web service itself I used Express to handle the HTTP requests coming from Prometheus, and a small library called <a href="https://www.npmjs.com/package/@tailorbrands/node-exporter-prometheus?ref=blog.eldermael.io" rel="nofollow"><code>node-exporter-prometheus</code></a> to generate the Gauge metric type that exposes the sensor data.</p><h2 id="programming-the-web-service">Programming The Web Service</h2><h3 id="the-express-server">The Express Server</h3><p>An Express web service is not really that difficult; the relevant part is here:</p><pre><code class="language-javascript">const app = express();

app.use(promExporter.middleware);
app.use(readSensorMiddleware(i2c));
app.get(&apos;/metrics&apos;, promExporter.metrics);

app.listen(serverPort, () =&gt; {
    initSensor(i2c);
    console.log(`server started at http://localhost:${serverPort}`);
});</code></pre><p>We basically create an Express application, then register a middleware to gather the common metrics using <code>node-exporter-prometheus</code>, and register our own middleware to read the sensor before finally exposing an endpoint under <code>/metrics</code>.</p><h3 id="initializing-the-sensor-a-quick-premier-of-i%C2%B2c">Initializing The Sensor, A Quick Primer On I&#xB2;C</h3><p>The <code>initSensor</code> function applies the initialization logic required for the sensor to start reading the environmental data. It follows this diagram:</p><figure class="kg-card kg-image-card"><a href="https://github.com/ElderMael/codexarcana-blog/blob/master/content/blog/create-a-co2-sensor-with-raspberry-pi/state-machine.png?ref=blog.eldermael.io"><img src="https://github.com/ElderMael/codexarcana-blog/raw/master/content/blog/create-a-co2-sensor-with-raspberry-pi/state-machine.png" class="kg-image" alt="Building A Web Service For A CO2 Sensor With A Raspberry Pi" loading="lazy"></a></figure><p>For this, the <code>raspi-i2c</code> library provides two basic methods to interact with the I&#xB2;C sensor:</p><ol><li><code>writeSync(address: int, register: int, buffer: Buffer): void</code> allows writing to a specific register of I&#xB2;C devices by their address.</li><li><code>readSync(address: int, register: int, length: int): Buffer</code> allows reading a specific register of I&#xB2;C devices by their address.</li></ol><p>These are the two methods required to initialize the sensor. As can be inferred, the I&#xB2;C sensor is very simple to interact with. As long as you have the sensor address (in our case <code>5a</code>) and a register to read from or write to, you will be able to initialize the sensor and read the required data.</p><p>This small snippet reads the hardware id from the sensor and checks it is the correct one:</p><pre><code class="language-javascript">// SENSOR_ADDRESS is 0x5a, HARDWARE_ID_REGISTER is 0x20
const hardwareIdBuffer = i2c.readSync(SENSOR_ADDRESS, HARDWARE_ID_REGISTER, 1);
const hardwareId = hardwareIdBuffer[0];

if (hardwareId !== SENSOR_HARDWARE_ID_MAGIC_NUMBER) { // This is 0x81
   console.log(&quot;Hardware ID did not match: &quot;, hardwareId);
   // ... call error handler
}</code></pre><p>As you can see, this is very straightforward. You choose whether you want to read from or write to a sensor, and how many bytes. You pass a <a href="https://nodejs.org/api/buffer.html?ref=blog.eldermael.io" rel="nofollow">Buffer</a> or get one depending on the operation. After that it is just a matter of inspecting the bytes returned according to the <a href="https://cdn.sparkfun.com/assets/learn_tutorials/1/4/3/CCS811_Datasheet-DS000459.pdf?ref=blog.eldermael.io" rel="nofollow">datasheet</a>.</p><h3 id="read-data-from-the-sensor-1">Read Data From The Sensor</h3><p>Finally, <code>readSensorMiddleware</code> will read both the CO2 and TVOC output from the sensor data address and set the Prometheus Gauges, along with any errors found. Here is the relevant snippet of code:</p><pre><code class="language-javascript">// SENSOR_ADDRESS is 0x5a, STATUS_REGISTER is 0x00
const statusRegisterReading = i2c.readSync(SENSOR_ADDRESS, STATUS_REGISTER, 1);
// bit at position 4 signals new data is ready
const isDataReady = bitwise.integer.getBit(statusRegisterReading[0], 4);

// If data is not ready check errors
if (isDataReady === 0) {
    checkErrorRegister(i2c);
    return;
}

// Read Data, 8 bytes, the first two bytes have the co2 reading
const buffer = i2c.readSync(SENSOR_ADDRESS, RESULT_DATA_REGISTER, 8);

// Convert the first two bytes to a 16 bit integer with big endianess
const co2Reading = buffer.readUInt16BE();
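
// Note: bytes 2-3 of the same 8-byte read hold the TVOC value (this
// offset is an assumption taken from the CCS811 datasheet's result
// data layout; the original snippet only decodes CO2), so the
// tvoc_ppb gauge shown later can be fed from this very buffer, e.g.:
// const tvocReading = buffer.readUInt16BE(2); // big-endian, like CO2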

// Set the Prometheus Gauge to the result
co2Gauge.set(co2Reading);</code></pre><h2 id="endpoint-result-and-grafana">Endpoint Result And Grafana</h2><p>Now, with everything wired together, here is the resulting payload when sending an HTTP request (omitting comments and other metrics).</p><pre><code class="language-shell">$ http --body GET http://192.168.1.20:8080/metrics
co2_ppm{appName=&quot;co2-sensor-pi&quot;} 405
tvoc_ppb{appName=&quot;co2-sensor-pi&quot;} 0
message_invalid_errors{appName=&quot;co2-sensor-pi&quot;} 6
read_reg_invalid_errors{appName=&quot;co2-sensor-pi&quot;} 0
meas_mode_invalid_errors{appName=&quot;co2-sensor-pi&quot;} 0
max_resistance_errors{appName=&quot;co2-sensor-pi&quot;} 1
heater_fault_errors{appName=&quot;co2-sensor-pi&quot;} 1
heater_supply_errors{appName=&quot;co2-sensor-pi&quot;} 2
unknown_errors{appName=&quot;co2-sensor-pi&quot;} 2</code></pre><p>Once we have Prometheus scraping the metrics, here is the resulting Grafana dashboard:</p><figure class="kg-card kg-image-card"><a href="https://github.com/ElderMael/codexarcana-blog/blob/master/content/blog/create-a-co2-sensor-with-raspberry-pi/dashboard.png?ref=blog.eldermael.io"><img src="https://github.com/ElderMael/codexarcana-blog/raw/master/content/blog/create-a-co2-sensor-with-raspberry-pi/dashboard.png" class="kg-image" alt="Building A Web Service For A CO2 Sensor With A Raspberry Pi" loading="lazy"></a></figure>]]></content:encoded></item><item><title><![CDATA[Project Kickstarters for Microservices]]></title><description><![CDATA[<p>Previously in my post about <a href="https://medium.com/@eldermael/digital-platform-accelerators-a897ec4b92f4?source=post_page---------------------------" rel="noopener">Digital Platform Accelerators</a>, I wrote about Project Kickstarters. In this post, I will try to go deep into the patterns I have seen and implemented.</p><p>In many companies I have worked at, we usually implement authentication and authorization, logging, telemetry, etc.
I have implemented these in</p>]]></description><link>http://blog.eldermael.io/project-kickstarters-for-microservices/</link><guid isPermaLink="false">6341b0690494420070c875fe</guid><category><![CDATA[microservices]]></category><dc:creator><![CDATA[eldermael]]></dc:creator><pubDate>Thu, 01 Aug 2019 17:17:00 GMT</pubDate><media:content url="http://blog.eldermael.io/content/images/2022/10/imagen_2022-10-08_121625127.png" medium="image"/><content:encoded><![CDATA[<img src="http://blog.eldermael.io/content/images/2022/10/imagen_2022-10-08_121625127.png" alt="Project Kickstarters for Microservices"><p>Previously in my post about <a href="https://medium.com/@eldermael/digital-platform-accelerators-a897ec4b92f4?source=post_page---------------------------" rel="noopener">Digital Platform Accelerators</a>, I wrote about Project Kickstarters. In this post, I will try to go deep into the patterns I have seen and implemented.</p><p>In many companies I have worked at, we usually implement authentication and authorization, logging, telemetry, etc. I have implemented these in many project templates as a means of showing the capabilities of the platform on which the software runs and the fundamental structures it provides to lay out cross-cutting concerns. The idea behind project kickstarters is that they already give you an implementation of these concerns that is ready to use and lets developers focus on implementing business logic.</p><h1 id="reasoning-reference-architecture">Reasoning: Reference Architecture</h1><p>A recurring problem I have had with Software Architects is that, whenever I have worked at a big company with many teams, architecture is usually <em><em>retrofitted</em></em>, not <em><em>implemented</em></em>. The proverbial Ivory Tower Architect is one who gets a cross-cutting concern and then thinks of the solution without taking a look at the code that is already implemented.
Thus their solutions mechanically clash with the existing interfaces.</p><p>Good reference architecture, on the other hand, starts with solutions and research. Proofs of concept are usually a good starting point to generate templates because they necessarily implement something that we want to reach. If for some reason we cannot implement a solution to a problem, it means that our architecture is not fit for our context.</p><p>I have also met really good Software Architects, and a common pattern I have seen is that they always say &#x201C;let us prove that this is possible, that we can implement it&#x201D; in one way or another. Good project templates serve as reference architecture because you can see them deployed and working on a Software Platform, visit their repositories, and understand how they integrate with the platform.</p><h1 id="project-templates">Project Templates</h1><p>Project templates are not a new concept, but they most probably became more necessary with the advent of the microservices architecture. If you have a Digital Platform, good architecture mandates a series of cross-cutting concerns that can conceptually be grounded in templates requiring minimal customization to start the development of a new service.</p><p>I have seen many patterns for these, but the most common is a single git repository in which architects and devs alike pour all the best practices they have learned for development, along with sample integrations with technologies within the Digital Platform such as authentication or telemetry. These repositories are usually the product of spikes to find out whether a certain technology can be a good fit for the projects, or of work that has been done previously.</p><p>Nonetheless, project templates are the lowest common denominator to use as Platform Accelerators because they require a lot of customization to actually become an independent microservice.
Cloning a project and doing a lot of renaming is an error-prone task, as the smart reader will notice.</p><h1 id="project-generators">Project Generators</h1><p>Once you have achieved enough maturity regarding project templates, the next logical step is to create tooling that uses these templates and produces ready-to-use projects.</p><p>The focus of this post is to talk about them in the context of Digital Platforms, so tools like the Micronaut or Gradle initializers are not going to be described because, as far as I know, they do not allow you to customize the projects they produce with features specific to your context.</p><p>Tooling available to create project generators is varied. I have mostly used Yeoman and Atomist, but there are also Maven Archetypes, the Micronaut <code>mn</code> utility, the Gradle <code>init</code> subcommand, etc. Again, tools that do not provide a way to integrate platform-specific features, e.g. Micronaut <code>mn</code>, are not discussed.</p><h2 id="yeoman">Yeoman</h2><p>Yeoman is a project scaffolding tool similar to Maven archetypes or Rust Cargo. In principle Yeoman is really easy to use, as it works with templates and an &#x201C;in memory&#x201D; file system that allows you to stream files from a predefined location and apply transformations to them.</p><p>Whatever you can do with memory-fs editors you can do with all the files using Yeoman. I have also used sub-generators as feature toggles to allow generation of code with specific features on them.</p><p>The drawback of using templates is that you will not have a good feedback cycle unless you generate a project from scratch, run the generated project, and hopefully run a good test suite contained within the project. For example, if Yeoman generates a front end application you will want to run <code>npm run build</code>, or if you are generating a Gradle project, <code>./gradlew build</code>.
These will let you know whether the code compiles and the tests pass with the set of features you want. All of this takes time, so I would not recommend templating files at all.</p><p>What has worked best for me is to download a repository into the directory where Yeoman expects the template files to be and then do the replacing and renaming. This ensures that at least the starting point of your code generation is a project that is hopefully already in a good state (by having that project carry a pipeline and a battery of tests that actually work).</p><p>The next problem is making sure that the generated project is not in a bad state, and soon enough you will need a pipeline that generates and discards projects after running tests on them.</p><p>Finally, the tools Yeoman gives you are very basic. String replacement, file renaming and so forth are the bare minimum for project generators, and I soon found a lot of complexity while trying to do more sophisticated things such as properly removing/adding features to a project depending on the user's needs. You could theoretically create projects that have only the features some developers will need, but this becomes a complex problem due to the number of permutations that happen between these projects.</p><h2 id="atomist">Atomist</h2><p>Atomist's approach to code generation is built on independent seed projects that you take and apply code transformations to.
There is no templating mechanism here; instead, Atomist proposes using microgrammars and AST transformations to automate the changes to the project.</p><p>While I agree that you could apply AST transformations with Yeoman directly by parsing the Vinyl streams yourself, I do think that the API Atomist provides, along with microgrammars, is simpler and more powerful to use, as it provides really good abstractions over code projects and code transformations.</p><p>Using a Software Delivery Machine, you can use Git repositories as the starting point for a new project. By applying code transformations to the source code, you will produce a fresh project. The SDM will orchestrate the transformations, produce a new Git repository, and can also push it to a service such as GitHub or Bitbucket ready to be used.</p><p>In a later post, I will discuss approaches to code project generators and probably code examples to create feature toggles using them.</p>]]></content:encoded></item><item><title><![CDATA[Fighting Microservices Drift On Digital Platforms]]></title><description><![CDATA[<p>Previously in my post about <a href="https://medium.com/@eldermael/digital-platform-accelerators-a897ec4b92f4?source=post_page---------------------------" rel="noopener">Digital Platform Accelerators</a>, I wrote about Distributed Refactoring Tools. In this post, I will try to describe the different tools I have used and the circumstances that led me to use them.
I will also explain why they are necessary.</p><p>I have worked on four microservice projects in</p>]]></description><link>http://blog.eldermael.io/fighting-microservices-drift-on-digital-platforms/</link><guid isPermaLink="false">632a6f92ab45ff006ba2bdf1</guid><category><![CDATA[devops]]></category><category><![CDATA[microservices]]></category><dc:creator><![CDATA[eldermael]]></dc:creator><pubDate>Thu, 01 Aug 2019 01:57:00 GMT</pubDate><media:content url="http://blog.eldermael.io/content/images/2022/09/splash-ghost-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://blog.eldermael.io/content/images/2022/09/splash-ghost-1.jpg" alt="Fighting Microservices Drift On Digital Platforms"><p>Previously in my post about <a href="https://medium.com/@eldermael/digital-platform-accelerators-a897ec4b92f4?source=post_page---------------------------" rel="noopener">Digital Platform Accelerators</a>, I wrote about Distributed Refactoring Tools. In this post, I will try to describe the different tools I have used, the circumstances that led me to use them, and why they are necessary.</p><p>I have worked on four microservice projects in different roles: front-end and back-end developer, tester, infrastructure developer, and a mix of all of them. While I worked at three different companies during those projects, I almost always found the same problems appearing.</p><p>The need to share code between services using platform libraries and the consequent maintenance that comes with them. The need to create policy through pipelines that evolve constantly and end up breaking project builds due to backward-incompatible changes. And finally, the need to update configurations for both pipelines and microlibs on several projects at a time.</p><p>In hindsight, all these problems mostly come from the same source: <strong><strong>drift</strong></strong>.
Be it <em><em>configuration drift</em></em> due to a myriad of properties files or configuration sources; dependency hell due to <em><em>library version drift</em></em>; or <em><em>delivery drift</em></em> caused by feature disparity between projects and the pipelines required to deliver them.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://miro.medium.com/max/963/0*aAeyMEubOi-AHcDF.jpg" class="kg-image" alt="Fighting Microservices Drift On Digital Platforms" loading="lazy"><figcaption>Driftwood, Wikimedia Commons.</figcaption></figure><h1 id="why-does-this-kind-of-drift-happen">Why Does This Kind Of Drift Happen?</h1><p>It&#x2019;s because <em><em>you share code and settings</em></em>. Configuration drift happens when you have different settings or initialization data for your services or infrastructure. Dependency drift evolves into dependency hell when you have different versions of the same library and you share those versions across other software projects. Finally, delivery drift happens when your services cannot be delivered with the same pipeline definition over time.</p><p>The organic nature of software is what causes drift. At some point, we created strategies for conveying meaning about change, e.g. versioning and, more specifically, semantic versioning. With these versions, the goal is to keep up to date with the latest release because it should be the most stable or have the latest features that help us evolve our platform.</p><p>The problem is that semantic versioning is misleading in two ways: the latest release is not always the most stable, and in most programming languages there is no way to enforce the versioning contract.
Even Joe Armstrong, of Erlang fame, tried to find ways to check that one function was exactly the same as another in order to enforce compatibility.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Two functions f and g might be the same<br>if [f(1),f(2),f(3),...f(100)] is the same as [g(1).g(2),g(3),...,g(100)]<br><br>If f and g are of type int -&gt; int then we can reduce the inputs and output lists to SHAs and easily check if f and g might be the same <a href="https://t.co/rrgY0TuqUP?ref=blog.eldermael.io">https://t.co/rrgY0TuqUP</a></p>&#x2014; Joe Armstrong (@joeerl) <a href="https://twitter.com/joeerl/status/931910539051175936?ref_src=twsrc%5Etfw&amp;ref=blog.eldermael.io">November 18, 2017</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</figure><p>The latest release of a library is not always the most stable due to the nature of modifying software itself. Regressions happen; not breaking compatibility with previous releases is a goal, but there are no guarantees.</p><p>What I think is that semantic versioning, even if well-intentioned, is not sufficient if you want to avoid drift in your dependencies or configuration. The smart reader has probably also thought about dependency locking. Some tools have already integrated this feature so that every time you build a software project it will resolve the same dependencies. The problem with version locking is that it creates a maintenance problem of its own.</p><p>Version locking and the organic nature of software conflict with each other. If you lock dependencies, sooner or later you need to upgrade them or they will become obsolete. It&#x2019;s in your best interest not to fall behind on security patches and bug fixes.</p><h1 id="distributed-refactoring-as-means-to-ease-drift">Distributed Refactoring As Means To Ease Drift</h1><p>Unless you are in a monorepo (which has different challenges, out of the scope of this post), there are many constraints on evolving your software and keeping it up to date in a microservices architecture.</p><p>The first problem with microservices is that, unlike in a monolith, you need platform libraries. The share-nothing approach could be useful but leads to a lot of duplicated effort and even more drift, because everyone will be tempted to solve the same problems with different strategies.</p><p>Platform libraries are implementations of cross-cutting concerns at a platform level. In a monolith, AOP used to be the way to achieve this, but you cannot intercept microservice calls without adding more overhead between services. Examples of this are authentication implementations, logging, monitoring, telemetry, etc.
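</p><p>As a sketch of how such platform libraries are typically consumed, a Gradle build might align their versions through a single BOM (all the coordinates below are hypothetical, not a real platform):</p>

```groovy
// build.gradle -- consuming platform libraries at versions aligned by a BOM
// (coordinates are illustrative placeholders)
dependencies {
    // the BOM pins compatible versions of every platform library
    implementation platform('com.example.platform:platform-bom:2.3.1')

    // individual libraries inherit their versions from the BOM
    implementation 'com.example.platform:auth-client'
    implementation 'com.example.platform:telemetry'
}
```

<p>Every service importing the same BOM version resolves the same library versions, which narrows the window for version drift to the BOM upgrade itself.</p><p>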
Even the pipeline definitions of build servers such as Jenkins are a way of enforcing policy, and they require updates too.</p><p>Now that you have implemented cross-cutting concerns using those libraries, keeping them up to date will be the next challenge. This is where distributed refactoring comes into play.</p><p>If you are in a monorepo, refactoring can be a single commit and it&#x2019;s atomic. Updating libraries in different repositories will not be atomic, and it will be difficult to do by hand if you have more than a few microservices to update. Even though updating different repositories is not atomic, automating these updates as much as possible will save a lot of developer hours.</p><p>I remember working on a single pipeline to deploy only a certain type of microservice. Every time I had to add a feature, it required modifying at least a dozen services of that type. That could take anything from a couple of hours to an entire day depending on the definition and the size of the change. Even with shared pipeline libraries, updating the versions used is still a task that requires effort per microservice.</p><h1 id="the-tools-i-have-used-to-execute-distributed-refactoring">The Tools I Have Used To Execute Distributed Refactoring</h1><h2 id="atomist">Atomist</h2><p>I used Atomist at a very early stage of my second microservices project.
At that time I worked as a tester, and we needed to use clients for a huge number of SOAP services with auto-generated code.</p><p>While we could generate code from those services, working as a tester I knew that the service contracts were going to change eventually, and I did not want to create builders by hand for the myriad of objects that represented the requests and responses of those services.</p><p>Then I found Atomist and discovered that I could create software that transforms software, so I wrote a small program that added the required annotations to the generated code.</p><p>While this approach helped me a lot, along the way I discovered many other Atomist features that have helped me resolve the problems I have always faced during the development of microservices.</p><p>Atomist fingerprints allow you to keep track of the different versions of the platform libraries being used and, with this, drive updates. Configuration drift can also be tracked with Atomist.
But Atomist goes beyond just tracking: you can create Code Transformations to keep projects from drifting, delivered as pull requests that developers can accept when ready (or just apply yourself if you feel empowered to make the change).</p><p>Generating pull requests to update libraries, using microgrammars to look for specific parts of Gradle build files or NPM package.json files, is possible and very simple to do.</p><p>Delivery drift can also be tackled by having Atomist fix/add step definitions and Jenkins pipeline versions to keep projects up to date with the latest policy in your platform.</p><p>Finally, I have been experimenting with creating backward compatibility with project kickstarters by annotating new platform features and making Atomist copy such features from seed/reference architecture projects.</p><p>This is important because project kickstarters and reference architectures start to drift the moment you use them to start a new microservice. If users want to use new platform features, in most projects we have to do this manually with migration guides and manual steps. If you can automate adding new features, for example the use of a secret registry such as HashiCorp Vault, you could save a lot of developer hours otherwise invested in integrating that feature from reference architecture services.</p><h2 id="snyk">Snyk</h2><p>I have been in only one project where they used Snyk. It solves the problem of dependency version drift, but with an approach focused on security. I really do not have much experience using the tool, but I can vouch that it helped many other developers keep dependencies up to date when critical security issues arise.</p><p>Just recently we have had a huge number of pipelines failing due to security scans made by OWASP, and keeping the relevant libraries up to date can consume a lot of time.
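</p><p>For reference, those OWASP scans are often wired in through the Dependency-Check build plugin; in Gradle the configuration is roughly the following (plugin version and threshold are illustrative):</p>

```groovy
// build.gradle -- OWASP Dependency-Check plugin, illustrative configuration
plugins {
    id 'org.owasp.dependencycheck' version '8.4.0'
}

dependencyCheck {
    // fail the build when any dependency has a vulnerability
    // scored at or above this CVSS value
    failBuildOnCVSS = 7
}
```

<p>Running <code>./gradlew dependencyCheckAnalyze</code> then produces the report that gates the pipeline.</p><p>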
While most of the time we need to ignore such a problem because even the newest version does not resolve it, Snyk is capable of fixing these issues automatically with patches later on.</p><h2 id="rewrite">rewrite</h2><p>I became aware of rewrite only a few days ago and it looks very powerful, although it only works on Java projects. Its API looks fantastic for distributed refactoring, and I have seen presentations where it could refactor all the Guava references of a method across GitHub!</p><p>I am researching this project for future use, as one of my current project goals is to provide compatibility with other projects, so I can see the potential of rewrite to refactor calls to the microlibraries we use to provide platform features.</p><h1 id="conclusion">Conclusion</h1><p>Fighting microservices drift is a daunting task; while I have seen and implemented various approaches to it, it is still very difficult. I believe that keeping team independence is great, but automated refactoring could possibly make things better.</p><p>I remember that in my first project I had to go and review the front-end code of various codebases and create comments for developers to update their dependencies to introduce some features dictated by the platform. To my surprise, while some teams fixed the issue right away, others did not. In this case, I lacked the authority to enforce or apply the changes myself, so that went on for a while until things started to fail.</p><p>In later projects I had more power to do this, but I still find that going to each repository and applying the fixes by hand is very error-prone. This is not the ideal scenario, because drift happens in many places at a time and all of a sudden.
That was when I started to look for tools that would let me refactor code automatically or at least speed up the process.</p><p>My point is that even if you could keep things up to date yourself for a while, a microservice architecture will reach such a scale that doing things automatically is a better alternative than putting in the effort by hand and feeling the pain.</p>]]></content:encoded></item><item><title><![CDATA[Shared Pipeline Libraries To Deploy Microservices]]></title><description><![CDATA[<figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Interesting interview. But often a &quot;CI/CD  pipeline&quot; actually means 100 or 1000 separate pipelines. So the oil needs to be applied in many places. Hence we need to be able to express team-wide policy <a href="https://t.co/PlDFIVmmIj?ref=blog.eldermael.io">https://t.co/PlDFIVmmIj</a></p>&#x2014; Rod Johnson (@springrod) <a href="https://twitter.com/springrod/status/1115458163627646977?ref_src=twsrc%5Etfw&amp;ref=blog.eldermael.io">April 9, 2019</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</figure><p>Previously in</p>]]></description><link>http://blog.eldermael.io/shared-pipeline-libraries-to-deploy-microservices/</link><guid isPermaLink="false">6341b1620494420070c8760d</guid><category><![CDATA[microservices]]></category><dc:creator><![CDATA[eldermael]]></dc:creator><pubDate>Fri, 12 Jul 2019 17:21:00 GMT</pubDate><media:content url="http://blog.eldermael.io/content/images/2022/10/imagen_2022-10-08_122038830.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><img src="http://blog.eldermael.io/content/images/2022/10/imagen_2022-10-08_122038830.png" alt="Shared Pipeline Libraries To Deploy Microservices"><p lang="en" dir="ltr">Interesting interview. But often a &quot;CI/CD  pipeline&quot; actually means 100 or 1000 separate pipelines. So the oil needs to be applied in many places. Hence we need to be able to express team-wide policy <a href="https://t.co/PlDFIVmmIj?ref=blog.eldermael.io">https://t.co/PlDFIVmmIj</a></p>&#x2014; Rod Johnson (@springrod) <a href="https://twitter.com/springrod/status/1115458163627646977?ref_src=twsrc%5Etfw&amp;ref=blog.eldermael.io">April 9, 2019</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</figure><p>Previously in my post about <a href="https://medium.com/@eldermael/digital-platform-accelerators-a897ec4b92f4?ref=blog.eldermael.io" rel="noopener">Digital Platform Accelerators</a>, I wrote about Delivery Workflow Tools. A pattern within these tools was Shared Pipeline Libraries.</p><p>So far I have implemented pipelines on three different microservice platforms at three different companies, and I intend to explain my experience in this post.</p><h1 id="reasoning-do-not-repeat-yourself-and-avoid-a-death-by-a-thousand-cuts">Reasoning: Do Not Repeat Yourself And Avoid A Death By A Thousand Cuts</h1><p>About three years ago I was working on a project that had started with relatively few services, only five of them plus three front-end applications. On the instructions of the CTO at the time, I was assigned to the testing team as a quality engineer in charge of writing Cucumber tests for a series of Web Services made using JAX-WS that were going to be &#x201C;refactored&#x201D; into modern technologies.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">So, had an interview once with a CIO of a company a couple of years back for a DevOps position.<br><br>He decided I was not good enough because I did not write a Dockerfile by hand on a whiteboard so he sent me to the testing team.<br><br>1/?</p>&#x2014; Mael &#x1F1F2;&#x1F1FD; (@eldermael) <a href="https://twitter.com/eldermael/status/1101705360933568517?ref_src=twsrc%5Etfw&amp;ref=blog.eldermael.io">March 2, 2019</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</figure><p>I am glad to say that I proved my CTO wrong, because I wanted to join the team in charge of building the tooling that these new services were going to use. This team was branded the DevOps team, and they were going to be in charge of provisioning the infrastructure and maintaining a single Jenkins server that would be used for all of this. By implementing the pipeline for running my own tests against the web services, I was able to ask for a chance to work on that team, because they were understaffed at the moment.</p><p>I was very new to Jenkins, so the first thing we tried to do was program the Jenkinsfiles for each project ourselves in the eight repositories that were available.</p><p>Soon I found that I had a lot of repetition in these files. I had mostly copy-pasted the code and changed a few things such as project names and other parameters.</p><p>As you can see this was not good, but at the time the idea was that the teams in charge of those projects were going to write those files themselves once we had set up the patterns to deploy to the infrastructure we provided. Of course, software is organic: more requirements come and pipelines need to evolve, so we had to oil those Jenkinsfiles many times to add new features such as security scans, SonarQube quality gates, etc.</p><p>Our developers were busy delivering valuable features, so the work of our team was to introduce patterns for these features in the hope that they would implement them in their Jenkinsfiles. Alright? For the clever reader: yes, we had to also write these in each repository's Jenkinsfile.</p><h1 id="jenkins-shared-libraries">Jenkins Shared Libraries</h1><p>Updating every repository with these new features meant that we worked on one repository at a time, modifying the Jenkinsfile by hand and then running the pipeline to see if it worked.
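</p><p>For readers unfamiliar with the feature this section covers: a custom step in a Jenkins shared library is just a Groovy script under <code>vars/</code> whose <code>call</code> method wraps the repeated stages. A minimal sketch, with hypothetical step and parameter names:</p>

```groovy
// vars/standardBuild.groovy -- a hypothetical shared-library custom step
def call(Map params = [:]) {
    node {
        stage('Checkout') {
            checkout scm
        }
        stage('Build') {
            sh './gradlew clean build'
        }
        stage('Publish') {
            // parameters capture the few things that differ per project
            sh "./gradlew publish -PserviceName=${params.serviceName}"
        }
    }
}
```

<p>A consuming Jenkinsfile then shrinks to <code>@Library('platform-lib') _</code> followed by <code>standardBuild(serviceName: 'orders-service')</code>.</p><p>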
Remember, I was very new to Jenkins and the rest of my team was very much understaffed, so they were busy gathering requirements and trying to get order out of the chaos.</p><p>I started to research Jenkins shared libraries, which would allow me to define the common steps and encapsulate them into a single parameterizable abstraction.</p><p>We started to replace many stages in our Jenkinsfiles with a single line of code calling the new custom steps. The outcome was that we finally started to remove duplication from our Jenkinsfiles, but our pipelines still looked very much alike.</p><p>During my last two weeks at that company, I started to implement the whole pipeline as a single step. This required standardizing the projects running such pipeline definitions to use the same tooling and define the same build system tasks.</p><h1 id="testing-steps-and-definitions">Testing Steps And Definitions</h1><p>My next project had Jenkins up and running already, and it had a really nice shared library. But in hindsight, testing by triggering the pipeline time and time again is not very efficient, because you will spend a lot of time waiting for the pipeline to reach the steps you are interested in.</p><p>What we found is that testing Jenkins shared libraries is a very complex matter, because from the beginning Jenkins introduced this feature without any mechanism to test libraries in isolation. Having to use Groovy is also a challenge, as the dynamic nature of the language makes you double-check finger errors such as the number of parameters and their types.</p><p>Fortunately, this is not a new problem, and there are projects that ease these steps.</p><h2 id="jenkinscijenkinspipelineunit"><strong><strong>jenkinsci/JenkinsPipelineUnit</strong></strong></h2><h3 id="framework-for-unit-testing-jenkins-pipelines-contribute-to-jenkinscijenkinspipelineunit-development-by-creating-an%E2%80%A6">Framework for unit testing Jenkins pipelines.
</h3><p>github.com</p><p>We used JenkinsPipelineUnit extensively to test our workflows, and it helped us mainly to avoid regression bugs, but testing the integration with external systems (such as SonarQube, Fortify, Terraform, AWS) was still very time-consuming.</p><h1 id="convention-over-configuration-plus-flexible-dsls">Convention Over Configuration Plus Flexible DSLs</h1><p>Once I had experienced a somewhat mature shared library, I started a new project which required scaling things to a new level, as the number of microservices was many times more than I had seen so far. When this situation happens there are always going to be edge cases.</p><p>These can be in the form of projects that need to skip steps, projects that require a slightly different task order, and various other changes. When you start to see this problem emerging, a useful pattern is creating your own DSL for pipelines within the Digital Platform of your company.</p><p>This means that instead of defining steps with pipeline definitions, you will define a DSL that is focused on the Platform and follows Convention over Configuration to detect the needs of a project.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://medium.com/media/8ecf583510b80be7985a920d108e947b?ref=blog.eldermael.io"><div class="kg-bookmark-content"><div class="kg-bookmark-title">openshiftPipeline.groovy &#x2013; Medium</div><div class="kg-bookmark-description">GitHub Gist: instantly share code, notes, and snippets.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://medium.com/favicon.ico" alt="Shared Pipeline Libraries To Deploy Microservices"></div></div></a></figure><p>This is also a highly conceptual task, but I can enumerate a few things that we have implemented:</p><ul><li>Detect project types and configure the steps that run depending on the type</li><li>Detect
requirements of the project from its structure, e.g. if there is a file with a database definition, automatically apply a step to provision it within your orchestrator</li><li>Inspect for tasks and skip them if they are not found, for example, check if there is a Gradle task for checking style and execute it if present</li><li>Apply fixes to common mistakes within projects and alert developers early about them through notifications, for example, fail on or touch files required for the pipeline to run properly, such as required manifests</li></ul><p>All these integrations are very context-dependent, so they will change with your platform. It is important to keep backward compatibility by supporting older models as you evolve the pipeline DSL.</p><p>Again, the important part here is that the DSL reflects the standardized way you want application pipelines to behave to support your organization's workflow, i.e. they become <a href="https://medium.com/@eldermael/digital-platform-accelerators-a897ec4b92f4?ref=blog.eldermael.io" rel="noopener">Delivery Workflow Tools</a>.</p>]]></content:encoded></item><item><title><![CDATA[Digital Platform Accelerators]]></title><description><![CDATA[<p>A Digital Platform is described as the set of technologies that function as the context in which the company's applications run.
A combination of middleware that serves as the foundation for the different products and APIs that are part of an organization.</p><p>We have had Platforms as a Service (PaaS) for</p>]]></description><link>http://blog.eldermael.io/digital-platform-accelerators/</link><guid isPermaLink="false">6341b2200494420070c8762d</guid><category><![CDATA[Accelerators]]></category><dc:creator><![CDATA[eldermael]]></dc:creator><pubDate>Fri, 17 May 2019 17:24:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1498887960847-2a5e46312788?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDE3fHxzcGVlZHxlbnwwfHx8fDE2NjYxMTUwMjY&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1498887960847-2a5e46312788?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDE3fHxzcGVlZHxlbnwwfHx8fDE2NjYxMTUwMjY&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Digital Platform Accelerators"><p>A Digital Platform is described as the set of technologies that function as the context in which the company's applications run. A combination of middleware that serves as the foundation for the different products and APIs that are part of an organization.</p><p>We have had Platforms as a Service (PaaS) for a while, and I have worked on three digital transformations whose objective was to deliver applications and the platform to sustain them.</p><p>During those transformations, I have observed <em><em>Digital Platform Accelerators</em></em> emerging as patterns and tools that help you reduce the friction created by deploying and managing many new services into a Digital Platform.
The objective is clear: they provide faster access to the platform itself.</p><p>Here are some types of accelerators I have implemented and used many times during these projects.</p><h1 id="project-kickstarters">Project Kickstarters</h1><p>In order to create services fast, enterprises have created mechanisms to build software applications using conceptual templates that can be deployed to a platform. Such templates are the aggregate of the technologies, best practices, and integrations required to run inside the Digital Platform.</p><p>I have implemented a few Project Kickstarters using different mechanisms and patterns; here are some of them:</p><h2 id="project-templates">Project Templates</h2><p>The idea is to have a project that serves as a blueprint of the typical application deployed in your platform. This project is cloned and, after some modifications by hand, you get a repository that can be used to create a new service in your platform.</p><p>Many times these templates are divided by technology stack. Maven archetypes are a good example of this pattern, but a significant difference is that the project templates <em><em>contain integrations that are required for the application to run inside the platform</em></em>.</p><p>These integrations differ from company to company, but the usual cases are authentication, authorization, database integrations, and caching. As you can deduce from these, we are talking about cross-cutting concerns that a Digital Platform should provide.</p><h2 id="project-generators">Project Generators</h2><p>Eventually, you will have as many templates as technology stacks, and these templates will evolve as the platform changes to integrate more cross-cutting concerns. The logical next step is to provide platform users with a way to generate applications without the error-prone task of modifying a Project Template by hand.</p><p>Enter Project Generators.
These applications will take a set of parameters that change between applications in your Platform and produce a repository ready for deployment.</p><p>Another benefit is that a generator is software too, so it can be used to remove or add features in the generated projects. For example, if the new application does not require a database, the generator can remove the code that provides database access from the codebase.</p><p>Feature generation is a complex topic and it&#x2019;s very hard to implement. Generators tend to grow complex if many features are provided, so it is always better to keep them simple or to have template projects to pull code from and transform.</p><p>My dislike for tools such as Yeoman is that they work at a very low level, mostly manipulating text files as templates or doing low-level string replacement. This has led to many bugs: replacing a string across many files and many layers can cause very nasty issues that are hard to resolve.</p><p>Atomist provides the next step in this kind of tooling, in my opinion, as it allows you to use AST transformations for code and also string replacements if your needs are not that sophisticated.</p><p>Examples of this are Yeoman, Atomist, JHipster, and <a href="https://start.spring.io/?ref=blog.eldermael.io" rel="noopener ugc nofollow">https://start.spring.io/</a>.</p><h1 id="delivery-workflow-tools">Delivery Workflow Tools</h1><p>In order to be deployed, generated projects also need a standardized pipeline from your lower environments to production. A typical problem in Digital Platforms is trying to establish this pipeline, as your projects may differ in very small details that need to be captured by your delivery pipeline. Here are some useful patterns for achieving this:</p><h2 id="custom-build-system-integrations-and-tasks">Custom Build System Integrations And Tasks</h2><p>You may have tasks small enough that they can be integrated into your build automation tools such as Gradle/NPM.
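</p><p>A sketch of what this can look like in Gradle: a platform-specific task registered in the build script so the pipeline only has to invoke it (the task name and commands are hypothetical):</p>

```groovy
// build.gradle -- a hypothetical platform task owned by the build tool
tasks.register('publishToPlatform') {
    group = 'platform'
    description = 'Packages the application and pushes it to the platform registry'
    dependsOn 'build'
    doLast {
        // the real publishing logic lives here, e.g. building and pushing an image
        exec {
            commandLine 'docker', 'build', '-t', "registry.example.com/${project.name}", '.'
        }
    }
}
```

<p>The CI server then just runs <code>./gradlew publishToPlatform</code>, keeping the logic versioned alongside the project.</p><p>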
You encapsulate your steps in the tool instead of the pipeline, and they get executed by calling a task defined there.</p><p>These tasks usually conform to mandated designs in order to progress the applications through the platform. Examples of these tasks are running isolation and API tests, OWASP dependency checks, publishing to different platform environments, packaging the application in the preferred format for the servers/containers, etc.</p><p>A possible example of this tooling is implemented in Netflix Nebula. I have also created custom Gradle plugins that encapsulate many of the steps required to build applications and deploy them to a platform such as Kubernetes/OpenShift.</p><h2 id="shared-pipeline-libraries">Shared Pipeline Libraries</h2><p>CI/CD tools such as Jenkins provide you with mechanisms to create shared libraries that store common steps required to deploy applications. These libraries are pulled from your CI/CD server and contain definitions of shared steps that are used across your applications. They can also provide templates of fully-fledged pipelines that consume your source code repositories and deliver artifacts up to production.</p><p>Unfortunately, shared pipeline libraries can sometimes be very complex, and many developers I know think they go against the DevOps way of thinking, as the platform is somehow mandating the structure of your pipeline.</p><p>On the bright side, Shared Pipeline Libraries avoid the problem of distributed refactoring across applications within a Digital Platform. Imagine having to modify the pipeline definitions of every application if you introduce a new stage required for all of them. An example of this is security scanners mandated by your security engineers.</p><p>Unfortunately, creating these shared steps and pipelines requires a lot of effort, because they need to be as general as possible, but eventually you will have small edge cases. Maybe you need a different database in one project.
Maybe you need to skip certain stages in another project.</p><p>The most sophisticated Shared Pipeline Libraries I have coded use <em>Convention Over Configuration</em> and small DSLs to build standardized projects. This removes the friction from deploying, as new teams can have a newly generated project delivered into production in minutes.</p><h2 id="delivery-workflow-clis">Delivery Workflow CLIs</h2><p>Over time, delivery steps in a pipeline start to get more complex as bash commands/scripts are abused. In order to use a fully-fledged programming language, a CLI tool is created and called from the pipeline instead to execute the more complicated stages.</p><p>The workflow of the application delivery and integration is captured in these CLIs, and they sometimes mirror the stages of the pipeline. The benefit of this is that you are decoupling your delivery process from the CI/CD server, thus you could potentially run the CLI locally and achieve the same result, helping you debug and get feedback faster than waiting for the pipeline to run.</p><p>Another benefit is that the CLI can accommodate different technology stacks while keeping the complexity low because you are not tied to shell scripts.</p><p>NEWT (Netflix Workflow Toolkit) is a possible example of this pattern.</p><h2 id="shared-build-images">Shared Build Images</h2><p>If you have created a standardized pipeline or workflow CLI, chances are that all the required tooling is scattered in different places. For example, if you are using Jenkins you will have Jenkins agents for each technology stack or even for each tool.</p><p>This becomes a maintenance burden sooner or later for your DevOps/Infrastructure team, thus a pattern I have seen emerging is to accommodate the bulk of your tools into a single Docker image/VM template. 
If you have a single image to use in your CI/CD server, you will have only one place to modify when new platform features require additional tooling.</p><p>This is important as many tools, such as security scanners and deployment tools like kubectl, are required regardless of the technology stack. You only pay the price of building this image once (even if it takes a lot of time), but you can easily update agent definitions and orchestrate pipeline changes that use the required tooling.</p><h1 id="distributed-refactoring-tools"><strong>Distributed Refactoring Tools</strong></h1><p>Unless you are working with a monorepo, if you introduce a feature to your Digital Platform you will have to look for its dependents and update them accordingly to use your new features.</p><p>A typical example of this is when there is a need to introduce new stages in your delivery pipelines, such as security scans. If you have a shared pipeline library or a CLI you might only need to update their versions, but this has to be done across all the projects that use your pipeline and/or CLI.</p><p>Project Generators and Templates also only provide the first steps for your project. As they are updated with new features, projects that are already live won&#x2019;t benefit from these features.</p><p>This implies touching several repositories and creating the commits to update existing projects with new cross-cutting concerns and platform features.</p><p>In the worst-case scenario, this process has to be done manually by someone. Once a new feature is available in the underlying platform, many projects are mandated to use this feature, thus their respective developers are tasked with doing the required refactoring. 
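</p><p>When the pipeline logic lives in a versioned shared library, the per-repository change can shrink to a one-line version bump. A hypothetical Jenkinsfile (the library name, tags, and step name are illustrative):</p>

```groovy
// Jenkinsfile -- bumping the library tag is all a repository needs
// to pick up a newly mandated stage such as a security scan
@Library('platform-pipeline@1.5.0') _   // was: platform-pipeline@1.4.0
standardPipeline()
```

<p>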
If you have Workflow CLIs or Shared Pipeline Libraries, it may only require you to update to the latest version.</p><p>There are a few tools that provide distributed refactoring:</p><h2 id="atomist">Atomist</h2><p>Atomist has the concept of editors, which are pieces of code that transform other code. While this is an oversimplification, these editors can produce pull requests to the required projects, thus solving the distributed refactoring problem.</p><p>I have been working with Atomist for around three years, and during this time the tool has been rewritten a couple of times. That kept me from adopting it for some projects, but currently the documentation and the tool are very powerful and complete.</p><h2 id="snyk">Snyk</h2><p>Snyk has automatic fixes for security vulnerabilities in the Java ecosystem. This is a more focused type of refactoring, but Snyk also manages multiple projects. Fixing vulnerabilities is an obligation in an enterprise setting, and Snyk has been my tool of choice for such requirements.</p>]]></content:encoded></item><item><title><![CDATA[The Many Problems Of Jenkins, Reloaded]]></title><description><![CDATA[<p>Having worked with different automation servers for the past 3 years, specifically Jenkins during the last one, I have come to realize several problems with Jenkins coming from experience in the trenches. 
In this article, I explain some of them.</p><h1 id="steep-learning-curve-and-frustrating-documentation">Steep Learning Curve and Frustrating Documentation</h1><p>I honestly think that</p>]]></description><link>http://blog.eldermael.io/the-many-problems-of-jenkins-reloaded/</link><guid isPermaLink="false">6341b26a0494420070c8763b</guid><category><![CDATA[Jenkins]]></category><dc:creator><![CDATA[eldermael]]></dc:creator><pubDate>Sat, 25 Aug 2018 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Having worked with different automation servers for the past 3 years, specifically Jenkins during the last one, I have come to realize several problems with Jenkins coming from experience in the trenches. In this article, I explain some of them.</p><h1 id="steep-learning-curve-and-frustrating-documentation">Steep Learning Curve and Frustrating Documentation</h1><p>I honestly think that Jenkins is hard to learn. Part of this is that the documentation is hard to grasp from a beginner perspective. I attempted to learn Jenkins using the official documentation. I found that it lacked a coherent structure and that the examples leave many details out. After a month of reading it, I decided that I was not making any progress trying to follow it.</p><p>Finally, I decided to read a book about Jenkins and ditch the documentation for a while. While the book helped me with a structure that I couldn&#x2019;t find in the docs, to my surprise even relatively new books leave many features out while only touching on some of them. By this I mean essential features such as how to avoid blocking executors while waiting for user input! 
Keeping my Jenkins up to date, I also found that the book won&#x2019;t always match the current documentation, or that the documentation has been adding new ways to do the same thing.</p><h1 id="how-do-you-automate-jenkins-setup-seriously">How Do You Automate Jenkins Setup, Seriously?</h1><p>Working in a cloud-like environment means that I need to create templates/golden images of the software that we use. Remember the whole cattle versus pets concept? How do you do this with Jenkins? Being as old as it is, Jenkins suffers from the design problems of software designed before the cloud era. By this, I mean that it&#x2019;s designed for manual installation, and you can see this in all the quirks that you have to go through when installing it with an automated script.</p><p>Creating a Salt Stack state for it was such a nightmare that I ended up giving up. On my own time, I tried to create first a Vagrantfile and then a Dockerfile to automate this. It is an awful process that involves things such as writing a file with a specific version number to disk so that Jenkins can skip its initial setup wizard altogether (later versions also require running a Groovy script at startup to set the correct state). This is crazy; just look at all the answers that you get on sites such as StackOverflow or Server Fault! I have probably read dozens of questions and articles asking the same thing.</p><p>Another problem is setting up credentials, configurations, and jobs themselves. Even if you have a Jenkinsfile helping you to define the structure of the job, you still need a way to recreate the jobs from scratch. Fortunately, I found that Jenkins has a CLI, but the documentation about it hides several details. The CLI itself requires knowledge of how Jenkins stores its configuration (tip: it&#x2019;s XML), and I would say that is pretty bad in itself because there is no documentation about the XML structure of many settings in Jenkins. 
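</p><p>For reference, the startup Groovy script mentioned above is usually a variant of the widely shared init-script workaround below; treat it as a sketch, as the exact classes involved have shifted between Jenkins versions:</p>

```groovy
// $JENKINS_HOME/init.groovy.d/skip-setup-wizard.groovy
// Marks the initial setup as complete so Jenkins boots straight into service,
// typically combined with -Djenkins.install.runSetupWizard=false on the JVM.
import jenkins.install.InstallState
import jenkins.model.Jenkins

def instance = Jenkins.get()
if (!instance.installState.isSetupComplete()) {
    InstallState.INITIAL_SETUP_COMPLETED.initializeState()
}
```

<p>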
I also had to read the source code to find the Java classes that mapped to that XML.</p><p>Being disappointed by the CLI, I found an OpenStack plugin to create what I needed: job configuration kept outside Jenkins in source control so that it could be automated. However, doing this was not an easy task, and sometimes I thought that it was too difficult to provide this using Jenkins.</p><p>I could continue on and on, but I think the point is clear: a Jenkins setup is hard to create, maintain, and automate. Even with CloudBees support you still have many problems ahead of you.</p><p>In conclusion, Jenkins is not built with automation for itself in mind. I hate namedropping, so I will only mention that there are automation servers that require a single YAML configuration file to store their configuration (even with secret backends). Others simply need you to back up a directory.</p><h1 id="jenkins-blue-ocean-is-great-but-not-there-yet">Jenkins Blue Ocean is Great but Not There Yet</h1><p>I liked the interface provided by Blue Ocean and the visibility it gives you, in contrast to just watching walls of text over and over. However, that is it: if you want to do something else you need to go back to the old UI, which is terrible until you get used to it. Again, coming fresh to Jenkins does not help, and nothing is intuitive or easy to discover.</p><p>I think this contrasts with the overall Jenkins experience for beginners, and here is my case: I had coworkers that had years of experience with Jenkins, and they all hated Blue Ocean. My opinion is that because they were already used to all the quirks I am talking about, they saw it as a dumbed-down version of what they had. While I agree with them on this, it is a testament to how Jenkins was never designed with simplicity in mind.</p><h1 id="the-never-ending-scripted-versus-declarative-pipeline-problems">The Never-Ending Scripted Versus Declarative Pipeline Problems</h1><p>Writing this is kind of thematic, old versus new. 
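</p><p>To make the contrast concrete, here is the same trivial pipeline in both syntaxes (a minimal sketch):</p>

```groovy
// Scripted pipeline: Groovy-first, everything is code
node {
    stage('Build') {
        sh './gradlew build'
    }
}

// Declarative pipeline: a structured DSL with a fixed skeleton
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './gradlew build'
            }
        }
    }
}
```

<p>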
If you, like me, came to Jenkins just recently, you will find that there are two ways of writing the same pipeline. As with programming languages, the price of freedom is complexity. The problems that come with two ways of doing the same thing are many, and here are some of them.</p><p>Having a mix of engineers with previous experience with Jenkins and people coming fresh to it is a tricky situation, to say the least. The scripted pipeline is full of idiosyncrasies and quirks that do not translate well to the new declarative pipeline. Concepts change names (e.g., Node and Agent), and plugins do not have support for the declarative way.</p><p>For complex tasks, I had to create script blocks that we later refactored into pipeline libraries, but we used many plugins created before the new syntax, so there was no way out of script blocks because we created Groovy objects that were needed between stages. It is a hard pill to swallow because there is no clear, intended way to build Jenkinsfiles consistently, the documentation sometimes won&#x2019;t tell you which syntax it is using, and the examples many times do not translate well between them.</p><h1 id="yes-the-plugin-ecosystem-is-awful">Yes, The Plugin Ecosystem is Awful</h1><p>Plugins for authentication, locking resources, tooling, Docker (it has many of them), SSH, et al. You name it and Jenkins will have a plugin for it. Then you find that many plugins do not work well with each other or are complete abandonware.</p><p>Is the documentation any better? 
No, many plugins have only an example or two, and many of my coworkers and I had to read the source code (fortunately we were well versed in Java, but other coworkers weren&#x2019;t).</p><p>With the advent of CloudBees and the concept of blessed plugins, I think that we are solving this problem, but there is a long way to go until Jenkins works smoothly.</p><p>At the end of the day we could just replace many plugins with shell scripts, but this comes with the penalty of spending too much time on something that was promised to work.</p><h1 id="cicd-concepts-are-missing">CI/CD Concepts Are Missing</h1><p>I remember a project before this one in which one of our principal DevOps engineers convinced managers to use another automation server because Jenkins did not provide concepts such as fan-in and fan-out out of the box.</p><p>At first I was skeptical, but having worked on a project with a microservice-like architecture I can say that this is a real problem that will cause you headaches. We had to create independent architecture guidelines and tooling to deal with fan-in. This is the reason for projects such as Jenkins Docker and Spinnaker. I remember the now defunct page of Concourse CI comparing the design of servers that had pipelines as first-class citizens with the state of Jenkins a year ago (when I started using it). It makes so much sense now because it described pipelines in Jenkins as an afterthought, something that Jenkins was never planned to do but ended up retrofitting without realizing that the problem was bigger than that.</p><h1 id="the-conclusion">The Conclusion</h1><p>I hope that in my next project I won&#x2019;t use Jenkins. Having spent a year doing CI/CD with it, I can say that I am tired. Now my coworkers know that I can take any task and do it using Jenkins with success, but this was a year-long process that was full of frustration and hard lessons. 
I look forward to the competition finally becoming as popular as Jenkins and forcing it to improve more than just changing its syntax for jobs. I want a Jenkins that does not have all the problems that I mentioned.</p>]]></content:encoded></item><item><title><![CDATA[DevOps Against The Tide]]></title><description><![CDATA[<p>In retrospect, I can say that I have worked on only one software development project that was successful. Only once have I tasted the victory of deploying code to production (and it was excellent code!). Most of my career consists of working for different consulting companies of different sizes, and</p>]]></description><link>http://blog.eldermael.io/devops-against-the-tide/</link><guid isPermaLink="false">6341b30c0494420070c87649</guid><category><![CDATA[devops]]></category><dc:creator><![CDATA[eldermael]]></dc:creator><pubDate>Thu, 16 Aug 2018 17:27:00 GMT</pubDate><content:encoded><![CDATA[<p>In retrospect, I can say that I have worked on only one software development project that was successful. Only once have I tasted the victory of deploying code to production (and it was excellent code!). Most of my career consists of working for different consulting companies of different sizes, and most of my work has been in software delivery contracts, i.e., coding software for clients of different sizes and cultures.</p><p>Many things have happened in those projects. One project ran out of budget. Another one was shut down after a year of development because it had constant delays and had 300 people working on it. On another occasion we had an excellent project that was doing very well, but the client decided to change from creating a new platform to &#x201C;refactoring&#x201D; (not really) the existing project.</p><p>Nonetheless, the defeat that has weighed on me the most was a recent project in which we had to swim against the tide in a DevOps team.</p><p>The thing with experience is that it teaches you many things that you shouldn&#x2019;t do. 
Moreover, having worked with many different people as a consultant, I have also learned to recognize behaviors in people and teams that lead to disaster, antipatterns in social interaction, and problems that arise between teams inside the same organization.</p><p>In this particular project, I could identify early on many things that immediately raised red flags.</p><h1 id="having-separate-teams-for-infrastructure-and-development">Having Separate Teams for Infrastructure and Development</h1><p>Infrastructure development and DevOps are tightly coupled. I would like to say that DevOps is a philosophy and that we need to create a term for infrastructure development, i.e., creating servers, managing clouds, all those things. However, I like to be pragmatic, and most DevOps roles in the industry are seen as infrastructure development first with a touch of tooling for software delivery pipelines.</p><p>I think that the worst problem of them all was that the team in charge of the infrastructure and its tooling was a separate team. That team also had very strict idiosyncrasies about virtual machines that I can only attribute to a very systemic, operational way of thinking.</p><p>For example, as ridiculous as it sounds, we did not have access to create virtual machines ourselves. We were the team in charge of creating and provisioning the infrastructure and platform for several applications, yet we were restricted to the most bureaucratic system I have seen. We had to create tickets for creating servers in bulk. However, even calling this creating them in bulk would be generous, as those servers were snowflakes created manually with very particular and irreproducible processes that were unknown to us.</p><p>Because of all of this, reverting the state of those virtual machines was also very challenging. 
Handoffs and updates of those servers frequently broke functionality.</p><p>I have seen the benefits of immutable servers and phoenix servers, but these practices simply eroded under the control of the team in charge of the infrastructure.</p><h1 id="too-many-chefs-in-the-kitchen-and-the-lack-of-accountability">Too Many Chefs in the Kitchen and the Lack of Accountability</h1><p>The role of Architect in this project led to many problems during our development. As the team in charge of software delivery pipelines and platform development, we were the glue among teams, requiring consensus for delivering working software. There is no way to achieve such a thing with more than 17 software architects with clashing decisions.</p><p>We had fruitless, long-lasting meetings that led nowhere. For a place where they value authority so much as to have so many architects, it was impossible to hold them accountable for working software.</p><p>Conway&#x2019;s Law applies once more to this project. Most software produced by this organization mirrored its chain of delegation from the top of the hierarchy, in which nothing is achieved, down to the actual functionality hidden by layers upon layers of indirection and senseless delegation created by developers. This hierarchy is the most rigid pyramid of bureaucracy that I have seen in all my years of software development.</p><p>One more thing that I would like to describe is how many architects became the proverbial &#x201C;Ivory Tower Architect.&#x201D; They have been away from coding so long that things such as git were alien to them. I can also attest that some of them are the classic example of the Peter principle: promoted not because of software development and architecture proficiency, but because they have been in the same place for years.</p><h1 id="tools-considered-harmful">Tools Considered Harmful</h1><p>Imagine that you are a carpenter. To work efficiently, you need to use specific tools that are intrinsic to your trade. 
Can you imagine being hired to do some work in a house, arriving, and then being stripped of these tools because they are considered harmful by the owners of the house? It sounds ridiculous, but these kinds of policies were in place due to security constraints and some architectural decisions of the software development practice in place.</p><p>Many of us know the corporate IT world where the git protocol is considered harmful, so we have to rewrite our URLs to use HTTP instead. However, this is not even the tip of the iceberg; access to GitHub was also banned because it could contain malicious software. You had to do everything behind the software repository server, so we had to proxy several things such as the central Maven repo, the NPM registry, and Red Hat repositories. We also had to store binaries of different projects for provisioning our CI/CD server with those tools.</p><p>More extreme cases required us to repackage software ourselves so we could install it, and to ask vendors to customize installers for other software so it could be proxied through our repository server. Many packages also had to download binaries from the internet, requiring us to spend several weeks looking for workarounds: downloading those dependencies, rewriting the software&#x2019;s hardcoded URLs, and updating them to point to binaries in the repository server.</p><p>I can say with confidence that we spent several weeks customizing software for the sake of security. Unfortunately, most software assumes that the internet is available to download dependencies, but this was not the case in this organization.</p><h1 id="certificates-and-pki-done-poorly">Certificates and PKI, Done Poorly</h1><p>If there is one thing that almost every tool uses nowadays, it is Public Key Infrastructure. 
If you go back to the previous section, you can see that most tools I mentioned require valid certificates to work correctly.</p><p>Would you like to make almost everything fail while trying to create a software pipeline or an infrastructure platform? Simply deploy your own PKI, using your own Certificate Authority, and require every connection to use an intermediate certificate.</p><p>The problem is not PKI itself; the problem is that if you force connections to go through this intermediate certificate, you need an excellent automated process to provide such certificates. We did not have such a nicety. We had to create certificates manually for each server created.</p><p>To make things worse, at least on Red Hat, even if you provide certificates in the system-wide store, some software rolls its own certificate store or ignores it altogether. For example, we have Java and its Key and Certificate Management Tool. Python also has the certifi bundle of certificates. Jenkins may install its own versions of the JRE (again, Java). You also have the quirks of how Red Hat installs certificates. HashiCorp Consul and many other tools require certificates from the same CA to be explicitly set in the configuration.</p><p>I think we wasted a lot of time provisioning certificates for every new tool in our hands. I cannot even recount how many times we first had to configure our tools to bypass SSL and then figure out a way to provide the required certificate to connect to whatever we needed.</p><p>Finally, we had a tool to generate those certificates automatically but we did not use it: HashiCorp Vault. 
The problem was that it was not under our control, but I will address why this is a problem later.</p><h1 id="the-result-is-low-quality">The Result Is Low Quality</h1><p>Software development and the DevOps practice require effort to be successful, but when you have all of these technical and organizational problems you are doomed to fail.</p><p>One thing that keeps bothering me is that I never had the authority to push best practices and technical decisions forward. Even if I know better, places with such strict hierarchies are impossible to navigate if you are seen as a resource awaiting orders.</p><p>The low quality of the software I had to write, full of workarounds and visibly counterintuitive decisions, is laughable at best. I can only end this by saying that technical debt like this, which is not paid in time, is not technical debt at all. It&#x2019;s low-quality software.</p>]]></content:encoded></item><item><title><![CDATA[Notes On Mental Overhead And Build “Heroes”]]></title><description><![CDATA[<p>Previously I wrote about problems caused by a lack of trust between software developers, giving some examples of complex build systems.</p><p>So, what is</p>]]></description><link>http://blog.eldermael.io/notes-on-mental-overhead-and-build-heroes/</link><guid isPermaLink="false">6341b3530494420070c87657</guid><category><![CDATA[Mental Overhead]]></category><dc:creator><![CDATA[eldermael]]></dc:creator><pubDate>Tue, 28 Nov 2017 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Previously I wrote about problems caused by a lack of trust between software developers, giving some examples of complex build systems. 
One topic I did mention is that having a complicated build process can create unbearable mental overhead for the programmers working on such a project.</p><p>So, what is mental overhead in software development? Simplifying the meaning a bit: it&#x2019;s the amount of effort you have to put in to hold the mental model of a software system in your working memory. We also call it cognitive load.</p><p>This load is the reason we use things such as abstractions in software: they allow us to offload entire parts of a system from our working memory. It&#x2019;s also the reason we, as programmers, value simplicity.</p><h2 id="the-heroes">The Heroes</h2><p>Going back to my comment about the build process in the previous article: it became so complicated over time that they had to separate two programmers from the rest to maintain it. Now, I could only deduce those programmers were very good at offloading that problem from others, and they eventually became the &#x201C;heroes&#x201D; of the build system. The problem here is not trying to make things simpler but keeping things the way they are for the sake of stability; they became so accustomed to such complexity that they started to see it as something ordinary.</p><p>I honestly think such a thing is never going to be good. This kind of environment makes people stubborn because of the perceived stability, because when people do not know how to lead, they try to control.</p><p>The second problem is that because heroes are mostly held in such high esteem, contradicting their opinions is seen as heretical. 
While I agree on being opinionated about software development practices, being opinionated while appealing to fear is neither productive nor useful, because it will probably stop disruption and innovation, and it is not the right reason at all.</p><h2 id="the-two-real-problems-complexity-and-fear">The Two Real Problems: Complexity and Fear</h2><p>I have seen that complexity becomes fear eventually. These heroes are there to make things safe and stable at the cost of flexibility. I do agree that stability is a good thing, but I also argue that to move forward you must make the trade-off between this and flexibility.<br>Recently I had to retract the use of a tool that generated code for our project. This particular CLI became a fantastic accelerator for our team, but unfortunately, the CLI was retired because the company behind it decided to move to a SaaS model. That model did not fit our client&#x2019;s needs.</p><p>Back when I added this to our project, I also had the fear that the tool was still in its milestone phase, but I had to make the trade-off between stability and making our team go fast at a critical moment.<br>Now I have to look for a replacement for this tool.</p><p>To me, the fact that it allowed us to deliver more quickly on our project at a critical moment says that it was worth the trade-off while it lasted.</p><p>The problem is that if you focus only on stability and error prevention, you do not give acceleration and velocity enough credit. Do you remember when DevOps did not exist, and now everyone thinks they are the best thing in the world? 
We as an industry realized that DevOps teams are accelerators; they may not be productive in the sense that they do not deliver the product per se, but they do make things happen faster.</p><p>Now, if you compare these build heroes and DevOps teams, you can see that the former have stability as a priority while the latter are all about moving forward faster, but with the trade-offs in mind you can get the best of both worlds.</p><h2 id="values">Values</h2><p>In chapter four of &#x201C;Extreme Programming Explained,&#x201D; Kent Beck covers the following values: communication, simplicity, feedback, and courage. I believe that if you apply those, you will regularly avoid the problems described in this text. I hopefully will write about them later.</p>]]></content:encoded></item><item><title><![CDATA[Is Your IDE Limiting Your Company Technology Choices? Or Is It Trust?]]></title><description><![CDATA[<p>So far, I have worked in two places where I have found a sad problem that I think is more common than I thought: having your codebase tightly coupled to your IDE.</p><p>I remember the first time I found this, I just had my first job as a</p>]]></description><link>http://blog.eldermael.io/is-your-ide-limiting-your-company-technology-choices-or-is-it-trust/</link><guid isPermaLink="false">6341b4d50494420070c87665</guid><category><![CDATA[IDE]]></category><dc:creator><![CDATA[eldermael]]></dc:creator><pubDate>Sun, 08 Oct 2017 00:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1640030104754-0a33c686c533?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDl8fHRydXN0fGVufDB8fHx8MTY2NjExNTYxOA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img 
src="https://images.unsplash.com/photo-1640030104754-0a33c686c533?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDl8fHRydXN0fGVufDB8fHx8MTY2NjExNTYxOA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Is Your IDE Limiting Your Company Technology Choices? Or Is It Trust?"><p>So far, I have worked in two places where I have found a sad problem that I think is more common than I thought: having your codebase tightly coupled to your IDE.</p><p>I remember the first time I found this, I just had my first job as a tech lead for a team of Java developers. I remember that after the first day, which was mostly paperwork, they showed me the place where I would sit and my machine. After that, someone sent me an email with the Standard Operating Procedure (SOP) to set up your environment.</p><p>Now, does having an SOP for something that I think should be trivial, such as setting up your development environment, trigger an alarm for you? At that time, with my experience at that point, it didn&#x2019;t. But if I remember correctly, that document was 9 to 10 pages long, excluding parts like the title, table of contents, and version tracking (yeah, at that time nobody used wikis to version their documents!). After reading it from top to bottom I just shook my head in disbelief.</p><p>They were running an ancient version of Eclipse at the time, and they were also tied to another very old version of Maven. They also had instructions to set up a few plugins that they kept in zip format to import directly because they were discontinued. I also remember that you had to put a lot of stuff in your Maven settings file in order to be able to download the dependencies of our only project&#x2026; and a long list of etcetera.</p><p>The second time I had this problem was pretty recent, but I dare to say it was ten times worse. They had scripts that created the Eclipse configuration files, and they saved those files to SCM. 
Now, of all the problems that you can imagine being the result of this, I can tell you a few:</p><ul><li>Hardcoded paths: because you have to save your IDE configuration to SCM, you have to explicitly set up paths in special directories or your whole project won&#x2019;t work.</li><li>Vendor lock-in: the project won&#x2019;t work on other IDEs because the configuration files are tied to the version of Eclipse they were using, and because that version was provided by a vendor of a JEE server, the plugins also generate specific configurations that are not compatible with other versions of Eclipse.</li><li>Developer images of the operating system: the project won&#x2019;t work at all if you use anything other than the specific image that they provide to developers, because they need special directory structures and special network drives set up for the project to work.</li></ul><p>Can you imagine working on such a project? I can tell you that it took me at least one week to have the applications working locally. Imagine the mental overhead that a build system like this creates. From time to time I still get emails from other software developers asking how I managed to get my local environment working.</p><p>The consequence of this is that both projects were stuck in this situation, and if you wanted to upgrade, let&#x2019;s say, your Java version, you were stuck with what the IDE supported. Quoting one of my coworkers, it&#x2019;s a sad situation when your entire team or organization is limited to what your current IDE can do.</p><p>The problem with this kind of mistake is that it forces you to make more bad decisions. 
I can tell you that those developer images are the symptom of a deeply broken build process.</p><p>In my opinion, if you cannot build your entire application and run it with a single command, I dare to say that you have problems.</p><h2 id="the-underlying-problem">The Underlying Problem</h2><p>An IDE or text editor is something personal: we as programmers care about efficiency, and our text editors or IDEs are personal choices that allow us to become more productive in our work. Some of us pick one because we see it as a bragging right.</p><p>To the point of this article: in both of these projects, someone decided that developers should use the same IDE all the time and dictated what they should do at every step of the build process. This may look like a logical choice for reasons such as consistency, but the underlying problem is about trust. If you do not trust your programmers to make trivial decisions, such as choosing their own IDE, I would dare to say that your organization has problems beyond these technical details.</p><p>Both projects had technical leaders that came to the wrong conclusion (that everyone should use the same IDE) from sound reasoning (that consistency is good). The problem here is that their process focuses on preventing errors, because that sounds so good when you think about it. But this kind of process &#x201C;<a href="https://www.slideshare.net/stevenpappas3/netflix-organizational-culture?ref=blog.eldermael.io" rel="noopener ugc nofollow">cripples innovation because it tries to prevent small mistakes</a>&#x201D; and it becomes a burden that does not prevent, and may even generate, big mistakes (such as having a complex build process).</p>]]></content:encoded></item><item><title><![CDATA[A Working Ceylon + Spring Boot Environment in macOS]]></title><description><![CDATA[<p>The first pain you feel when starting a new project, of course, is setting up your environment. 
In my case, I also had to learn a new operating system because I never had an Apple computer before.</p><p>Now, many things change between Linux and macOS but I still had a</p>]]></description><link>http://blog.eldermael.io/a-working-ceylon-spring-boot-environment-in-macos/</link><guid isPermaLink="false">6341b5270494420070c87672</guid><category><![CDATA[Working Ceylon]]></category><category><![CDATA[Spring Boot]]></category><category><![CDATA[macOS]]></category><dc:creator><![CDATA[eldermael]]></dc:creator><pubDate>Sat, 12 Nov 2016 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>The first pain you feel when starting a new project, of course, is setting up your environment. In my case, I also had to learn a new operating system because I never had an Apple computer before.</p><p>Now, many things change between Linux and macOS, but I still had a lot of things to learn before setting up a Ceylon + Spring Boot environment correctly. In the following paragraphs, I will explain what I did, and I hope you can reproduce it on your own to work with these two awesome technologies.</p><h1 id="iterm2-replacement-terminal-for-macos">iTerm2, replacement terminal for macOS</h1><p>By recommendation, I started by downloading <a href="https://iterm2.com/?ref=blog.eldermael.io" rel="noopener ugc nofollow">iTerm2</a>. The installation is very straightforward: click the download button to get the zip file, double-click it in Finder to extract its content, a single .app file (that is a &#x201C;package bundle&#x201D;), and execute it.</p><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/max/770/1*ib8-yv0EB-mOy_5zjpC6ag.png" class="kg-image" alt loading="lazy" width="700" height="392"></figure><p>One good thing that I found in iTerm2 is the so-called &#x201C;Unyxness&#x201D;, i.e. it has a lot of shortcuts to avoid using my mouse while in the console. 
I also enjoyed the <a href="https://material.google.com/?ref=blog.eldermael.io" rel="noopener ugc nofollow">Material Design</a> theme that matches my other application themes (I will talk about those later).</p><h1 id="homebrew-because-you-need-a-package-manager">Homebrew, because you need a package manager</h1><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://medium.com/media/1ce14a68e26d5cef3f4a6132e146f219?ref=blog.eldermael.io"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Marcos Besteiro on Twitter &#x2013; Medium</div><div class="kg-bookmark-description">What&#x2019;s bower?&#x201D; &#x201C;A package manager, install it with npm.&#x201D; &#x201C;What&#x2019;s npm?&#x201D; &#x201C;A package manager, you can install it with brew&#x201D; &#x201C;What&#x2019;s brew?&#x201D; ...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://medium.com/favicon.ico" alt></div></div></a></figure><p>Having a good terminal emulator will allow you to install <a href="http://brew.sh/?ref=blog.eldermael.io" rel="noopener ugc nofollow">Homebrew</a>. Homebrew is a package manager for macOS that allows you to install third-party packages from the console. Among the packages we need is Java, but I will talk about that later.</p><p>To install Homebrew, just enter this command in iTerm2 and follow the instructions:</p><pre><code class="language-shell">$ /usr/bin/ruby -e &quot;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)&quot;</code></pre><p>Now, Homebrew has a plethora of software that you can install, but first you should check that everything is running fine and that you have the package definitions up to date. You can do that with the following commands:</p><pre><code class="language-shell">$ brew update
$ brew doctor</code></pre><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/max/770/1*uu3Hg-7ljegiqPwGwhuXAg.png" class="kg-image" alt loading="lazy" width="700" height="386"></figure><p>Once you have verified that everything is working fine, you can continue with installing Java.</p><h1 id="tooling-needed-java-8-sdkman-ceylon-and-gradle">Tooling Needed: Java 8, SDKMAN!, Ceylon and Gradle</h1><p>To install Java 8 you need to run the following command:</p><pre><code class="language-shell">$ brew cask install java</code></pre><p>And to make sure that you have it up and running, just run a command to get the version you are currently running:</p><pre><code class="language-shell">$ java -version
java version &quot;1.8.0_112&quot;
Java(TM) SE Runtime Environment (build 1.8.0_112-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.112-b16, mixed mode)</code></pre><p>Now, before you install Ceylon you will need to install yet another tool called <a href="http://sdkman.io/?ref=blog.eldermael.io" rel="noopener ugc nofollow">SDKMAN!.</a> It will allow you to install and manage different versions of Ceylon (among other things!).</p><p>The setup is pretty straightforward: just type this in your terminal and follow the instructions:</p><pre><code class="language-shell">$ curl -s &quot;https://get.sdkman.io&quot; | bash</code></pre><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/max/770/1*qagrtTKGhZxHcjLY_xAmpA.png" class="kg-image" alt loading="lazy" width="700" height="471"></figure><p>You will have to close the current terminal session, and then you will have SDKMAN! working properly.</p><p>To confirm that it is working correctly, check the list of candidates that you can install; among those you will see Ceylon:</p><pre><code class="language-shell">$ sdk list</code></pre><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/max/770/1*fNw_SDBBZ8Dm4s-Ct7NDFg.png" class="kg-image" alt loading="lazy" width="700" height="474"></figure><p>As you can see in the image, you have a list of candidates that you can install, and SDKMAN! will show you how to install each one along with its latest version. In this case, to install Ceylon, you have to type the following command:</p><pre><code class="language-shell">$ sdk install ceylon</code></pre><p>If you have to install a specific version of any candidate, just add the version as another parameter to the command, e.g. <code>sdk install ceylon 1.2.2</code>. 
To see the list of available versions you can type <code>sdk list ceylon</code>.</p><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/max/770/1*dPLsj8xzHMmgQBXXTXK3Yg.png" class="kg-image" alt loading="lazy" width="700" height="488"></figure><p>How do you check that Ceylon is working fine? Just check the current version:</p><pre><code class="language-shell">$ ceylon --version
ceylon version 1.3.0 49375fd (Total Internal Reflection)</code></pre><p>Now that you have Ceylon, you will want to install Gradle (I will tell you why in the next section) using SDKMAN!. You also want to check that everything works as expected.</p><pre><code class="language-shell">$ sdk install gradle
...
$ gradle --version
------------------------------------------------------------
Gradle 3.1
------------------------------------------------------------
Build time:   2016-09-19 10:53:53 UTC
Revision:     13f38ba699afd86d7cdc4ed8fd7dd3960c0b1f97
Groovy:       2.4.7
Ant:          Apache Ant(TM) version 1.9.6 compiled on June 29 2015
JVM:          1.8.0_112 (Oracle Corporation 25.112-b16)
OS:           Mac OS X 10.12.1 x86_64</code></pre><h1 id="gradle-ceylon-and-managing-transitive-depedencies">Gradle, Ceylon and Managing Transitive Dependencies</h1><p>Let me explain something here. I have been working on two Ceylon projects so far. One of those projects is very small and does not depend much on external libraries. On the other hand, I have an application that started just like that, and I kept adding features until I finally decided to add Spring to it.</p><p>Because Ceylon modules do not handle transitive dependencies, I had to create a module descriptor with all the dependencies needed. 
It looked like this:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://medium.com/media/73127e0f733265390b9f9b112fcc3286?ref=blog.eldermael.io"><div class="kg-bookmark-content"><div class="kg-bookmark-title">module.ceylon &#x2013; Medium</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://medium.com/favicon.ico" alt></div></div></a></figure><p>As you can see, managing all the dependencies like this becomes a burden once you start adding more libraries/modules, as I had to. This is why you need Gradle and Renato Athaydes&#x2019; Ceylon plugin for Gradle.</p><h2 id="renatoathaydesceylon-gradle-plugin"><strong>renatoathaydes/ceylon-gradle-plugin</strong></h2><h3 id="ceylon-gradle-plugina-simple-gradle-plugin-to-manage-ceylon-projects">ceylon-gradle-plugin - A simple Gradle plugin to manage Ceylon projects.</h3><p>github.com</p><p>What this plugin does is manage your module&#x2019;s transitive dependencies by creating an <code>overrides.xml</code> file that tells Ceylon that you also need more dependencies than the ones declared in your module (to learn how this file works, refer to the Ceylon help located <a href="https://ceylon-lang.org/documentation/1.3/reference/repository/overrides/?ref=blog.eldermael.io" rel="noopener ugc nofollow">here</a>).</p><h1 id="creating-a-ceylon-project-and-building-with-gradle">Creating a Ceylon Project and Building with Gradle</h1><p>To create a Ceylon project you can use the command line to create the files for a new Ceylon module. 
You only have to type the following command and enter the required parameters so the Ceylon command line tool can generate the project structure for you.</p><pre><code class="language-shell">$ ceylon new simple</code></pre><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/max/770/1*FFHy83IqASQW6fci_JACMw.png" class="kg-image" alt loading="lazy" width="700" height="380"></figure><p>And this is where Gradle comes into play. First you will need to create a file named <code>build.gradle</code> and provide the contents needed to work with your module so the Ceylon plugin can create the overrides.xml file.</p><p>Here is how my <code>build.gradle</code> file looks for the module I created with the command line:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://medium.com/media/94519a1ed66e165c8802e2c993ad00b6?ref=blog.eldermael.io"><div class="kg-bookmark-content"><div class="kg-bookmark-title">build.gradle &#x2013; Medium</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://medium.com/favicon.ico" alt></div></div></a></figure><p>This is pretty simple: you just declare the plugin for Gradle to use. Afterwards you need to make sure that the <code>ceylon</code> Groovy closure contains the module that you specified on the command line, and finally you provide the repository from which the dependencies will be retrieved (in this case, jcenter).</p><blockquote><em>NOTE: The dependency block is there to exclude logback from your dependencies, as it is already contained within Ceylon. If you do not exclude it, you will get an error when trying to run the Spring Boot application.</em></blockquote><p>For Spring Boot, you will need to declare only the starter poms as dependencies and then the plugin will take care of the rest. 
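</p><p>Since the embedded gist may not render everywhere, here is a rough sketch of what the dependency part of such a <code>build.gradle</code> can look like. The starter coordinates are the standard Spring Boot 1.3 ones; how the plugin consumes them depends on the ceylon-gradle-plugin version, so treat this as illustrative and check the plugin&#x2019;s README:</p>

```groovy
// Illustrative only: declare the Spring Boot starter and exclude logback,
// which already ships with Ceylon (see the note above).
dependencies {
    compile('org.springframework.boot:spring-boot-starter-web:1.3.0.RELEASE') {
        exclude group: 'ch.qos.logback'
    }
}
```

<p>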
Here is how my <code>module.ceylon</code> and <code>run.ceylon</code> files look:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://medium.com/media/54901c686ed52bbc4d1e10f03428c565?ref=blog.eldermael.io"><div class="kg-bookmark-content"><div class="kg-bookmark-title">module.ceylon &#x2013; Medium</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://medium.com/favicon.ico" alt></div></div></a></figure><p>With this project structure, you have all you need to let Gradle build and run your application. Execute the following command:</p><p>The first time, you can see that Gradle will download all the direct and transitive dependencies for your project, and finally it will run our application&#x2019;s embedded server listening on port 8080.</p><pre><code class="language-shell">$ gradle runCeylon
:resolveCeylonDependencies
:createDependenciesPoms UP-TO-DATE
:createMavenRepo UP-TO-DATE
:generateOverridesFile UP-TO-DATE
:createModuleDescriptors UP-TO-DATE
:importJars UP-TO-DATE
:compileCeylon UP-TO-DATE
:runCeylon

  .   ____          _            __ _ _
 /\\ / ___&apos;_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | &apos;_ | &apos;_| | &apos;_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  &apos;  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v1.3.0.RELEASE)

1012 [main] INFO io.eldermael.ceylon.boot.run_ - Starting run_ on twmac.local with PID 70615 ...
3628 [main] INFO org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer - Tomcat started on port(s): 8080 (http)
3634 [main] INFO io.eldermael.ceylon.boot.run_ - Started run_ in 2.99 seconds (JVM running for 4.104)
&gt; Building 87% &gt; :runCeylon</code></pre><p>Now, you can always stop the execution by pressing Ctrl + C, but unfortunately this won&#x2019;t stop the servlet container, so you will need to kill the process using the PID manually, e.g.</p><pre><code class="language-shell">$ kill -9 70615</code></pre><h1 id="import-to-intellij-idea">Import to IntelliJ IDEA</h1><p>Because I do not expect you to edit more files on the command line with nano (or worse, Vim), you will need to download IntelliJ IDEA and get the Ceylon plugin, which will help you edit Ceylon files with nice features such as autocompletion and refactoring.</p><p>First, you need to download any version of IntelliJ IDEA (I use the community edition):</p><h2 id="intellij-idea-the-java-ide"><strong>IntelliJ IDEA the Java IDE</strong></h2><h3 id="code-centric-ide-focused-on-your-productivity-full-java-ee-support-deep-code-understanding-best-debugger%E2%80%A6">Code-centric IDE, focused on your productivity. Full Java EE support, deep code understanding, best debugger&#x2026;</h3><p><a href="https://www.jetbrains.com">www.jetbrains.com</a></p><p>Once you have installed it by double-clicking the file, you have to run the IDE for the first time. 
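</p><p>Before moving on to the IDE, a small aside on stopping the server: instead of copying the PID from the logs for <code>kill -9</code>, you can look it up by the port it listens on. A sketch, assuming <code>lsof</code> is available and the default port 8080; trying a plain <code>kill</code> first is gentler on the JVM than <code>-9</code>:</p>

```shell
# Stop whatever is listening on the application's port (8080 assumed,
# as in the run above). Try a plain kill before resorting to kill -9.
PORT=8080
PID=$(lsof -ti "tcp:$PORT" 2>/dev/null || true)
if [ -n "$PID" ]; then
  kill $PID && echo "stopped $PID"
else
  echo "nothing listening on port $PORT"
fi
```

<p>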
What you need to do to get the Ceylon plugin is to select the Plugins option in the Configure menu on the welcome screen.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://miro.medium.com/max/770/1*7abcLMc1OlCF7t_jvU1usA.png" class="kg-image" alt loading="lazy" width="700" height="537"><figcaption>Select the Plugins option to install the Ceylon IDE plugin.</figcaption></figure><p>Afterwards, in the next dialog, you have to click the Browse repositories button, and it will show you a dialog box where you can search for the Ceylon IDE plugin and install it.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://miro.medium.com/max/770/1*PMixY7GucyUQJqLkWGxRCg.png" class="kg-image" alt loading="lazy" width="700" height="573"><figcaption>Search for the &#x201C;Ceylon IDE&#x201D; plugin</figcaption></figure><p>Once you have downloaded the plugin and restarted the IDE, you can now import the project we created beforehand.</p><p>Note that it is important that you &#x201C;import&#x201D; the project from the Welcome screen instead of selecting &#x201C;Open&#x201D; or &#x201C;Create New Project&#x201D;; otherwise the project dependencies and autocompletion won&#x2019;t work (in my experience).</p><p>To import the project you need to select the root folder as shown in the image.</p><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/max/770/1*rW0ttFojcFcGd78YTZfbxw.png" class="kg-image" alt loading="lazy" width="700" height="915"></figure><p>Then you will be prompted with a dialog to import the project, and you have to select the option &#x201C;Import project from existing sources&#x201D;. 
Following that you will have a screen where most of the default options are OK, unless you want to customize how the Gradle project will be imported (I recommend using the default Gradle wrapper here).</p><p>You will also be prompted to select the options for the &#x201C;Project structure&#x201D; and thus the Ceylon compiler options. You will see a tab called &#x201C;Repositories&#x201D;. There you have to select the overrides.xml file that Gradle creates in the build process, so Ceylon can import the libraries that you specify and detect the transitive dependencies. You also have to check both options &#x201C;Use a flat classpath&#x201D; (unless you have specified otherwise in the Gradle plugin) and &#x201C;Automatically export Maven dependencies&#x201D;. This is very important: this way the IDE can autocomplete.</p><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/max/770/1*8R5ouqKFWOBjlrkbRSs0QQ.png" class="kg-image" alt loading="lazy" width="700" height="579"></figure><p>You will know that your project is correctly imported once you can see the transitive dependencies shown in the &#x201C;External libraries&#x201D; node of the project toolbar.</p><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/max/695/1*QDjxMbmGdOu7F9x0d-pqsA.png" class="kg-image" alt loading="lazy" width="632" height="1206"></figure><p>Unfortunately, at this point Gradle will crash if you do not have tests. Until you write tests, your only option will be to run every Gradle command skipping tests using the option <code>-x test</code>, e.g. <code>gradle runCeylon -x test</code></p><p>To see a proper way to test your Ceylon modules, you can follow <a href="https://ceylon-lang.org/blog/2016/02/22/ceylon-test-new-and-noteworthy/?ref=blog.eldermael.io" rel="noopener ugc nofollow">this guide</a>. The recommendation is that you create a module prefixed with <code>test</code> that will contain the tests of the module with production code. 
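</p><p>That naming convention is mechanical enough to script. A sketch that creates the matching test-module skeleton for a given module; the <code>source/</code> layout, the <code>ceylon.test</code> version and the module version are assumptions here, so adjust them to your project:</p>

```shell
# Create the skeleton of a test module following the test.<module> convention.
# Assumptions: sources under source/, ceylon.test 1.3.0, module version 1.0.0.
MODULE="foo.bar"                     # hypothetical production module name
TEST_MODULE="test.$MODULE"
TEST_DIR="source/$(echo "$TEST_MODULE" | tr '.' '/')"
mkdir -p "$TEST_DIR"
cat > "$TEST_DIR/module.ceylon" <<EOF
module $TEST_MODULE "1.0.0" {
    import ceylon.test "1.3.0";
    import $MODULE "1.0.0";
}
EOF
echo "created $TEST_DIR/module.ceylon"
```

<p>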
E.g., if you have a module named <code>foo.bar</code> then your test module will be named <code>test.foo.bar</code>.</p><h1 id="wrapping-up">Wrapping up</h1><p>I am documenting this because it was a lengthy back-and-forth to set up this environment, but it was worth it, as now I have all I need to continue with this project. I hope this will be useful in the future, but I am more interested in the Spring Boot integration promised by Gavin King.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://medium.com/media/4ff7f6b216428b553e4a4aa5af5f8a70?ref=blog.eldermael.io"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Gavin King &#x1F418; on Twitter &#x2013; Medium</div><div class="kg-bookmark-description">Let&#x2019;s see if we have time to slip #springboot support into Ceylon 1.3.1. https://t.co/JqmDLEEkTe</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://medium.com/favicon.ico" alt></div></div></a></figure>]]></content:encoded></item><item><title><![CDATA[A (Very) Tiny HTTP Server With Ceylon]]></title><description><![CDATA[<p>Finally <a href="http://ceylon-lang.org/?ref=blog.eldermael.io" rel="noopener ugc nofollow">Ceylon 1.2</a> has been released and I am pleased to start coding more and more with it. If you have not taken a look at Ceylon, let me tell you that it is one of the best designed programming languages that I have seen. 
The type system is simply</p>]]></description><link>http://blog.eldermael.io/a-very-tiny-http-server-with-ceylon/</link><guid isPermaLink="false">6341b5a20494420070c87686</guid><category><![CDATA[HTTP Server]]></category><category><![CDATA[Working Ceylon]]></category><dc:creator><![CDATA[eldermael]]></dc:creator><pubDate>Fri, 13 Nov 2015 00:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Finally <a href="http://ceylon-lang.org/?ref=blog.eldermael.io" rel="noopener ugc nofollow">Ceylon 1.2</a> has been released and I am pleased to start coding more and more with it. If you have not taken a look at Ceylon, let me tell you that it is one of the best designed programming languages that I have seen. The type system is simply unmatched by any contemporary programming language.</p><p>Now, as a web developer on both the front end and the backend, I really was expecting an easy way to create a web server that returns files and JSON data, so I could create a little web application for fun and start learning more of the language. I also wanted to use Ceylon on the client side, because it does not just target the JVM on the backend; it also transpiles to JavaScript.</p><p>Fortunately, Ceylon 1.2 comes with different modules, and one of them is ceylon.net.http.server, which uses JBoss Undertow behind the scenes. 
For now, I just wanted to make a quick program that returns a single string over HTTP; afterwards I will build on top of this to create what I want.</p><h2 id="the-server-and-the-endpoints-first-glance-at-the-code">The Server And The Endpoints, First Glance At The Code</h2><p>Code is king, so let&#x2019;s start. Here is a three-file gist of an HTTP server with only one endpoint:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://medium.com/media/bcf8a3e1d6074fd55effade8277b98f7?ref=blog.eldermael.io"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Medium</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://medium.com/favicon.ico" alt></div></div></a></figure><p>If this code looks familiar to you, it is because it is a version of the code snippet currently shown on the official Ceylon web site. But let&#x2019;s analyze it:</p><p>The first two files are self-explanatory. One is a Ceylon module definition; here you can see that our module will import the module ceylon.net at a specific version. The second file is a Ceylon package descriptor stating that our package will be shared, i.e. visible to other modules if they import ours.</p><h2 id="how-to-create-a-server">How to create a server</h2><p>Next is the juicy code; here you can see various lines that do the following:</p><p>First, we import the factory function newServer, which will help us create an HTTP server that is defined by the interface ceylon.net.http.server.Server. 
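</p><p>In case the gist embed does not load, the whole program described in this section looks roughly like the following. This is a reconstruction from the prose in this post and the snippet on the Ceylon site, not the exact gist, so treat the path, names and imports as assumptions:</p>

```ceylon
// Reconstructed sketch (not the exact gist): one endpoint answering GET /hello.
import ceylon.net.http { get }
import ceylon.net.http.server { newServer, Endpoint, Request, Response, equals }

shared void run() {
    value server = newServer {
        Endpoint {
            path = equals("/hello");
            acceptMethod = { get };
            void service(Request request, Response response) {
                response.writeString("hello world");
            }
        }
    };
    // start listening on the default port, 8080
    server.start();
}
```

<p>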
This interface, as you would expect, handles the lifecycle of our HTTP server and allows us to define Endpoints.</p><p>Now, all this function needs is a stream (list, sort of) of ceylon.net.http.server.Endpoint instances that will define what your server will respond to.</p><h2 id="endpoints">Endpoints</h2><p>Now, to create an Endpoint you need three parameters:</p><ul><li>A <em>path</em> defined by a ceylon.net.http.server.Matcher, i.e. the URIs that the endpoint will handle. For simplicity I am using the factory function called equals, which returns a Matcher that makes the endpoint respond only to URIs that match exactly the string I am using. You can compose these Matchers for greater flexibility.</li><li>An <em>acceptMethod</em>, which is a stream of ceylon.net.http.Method instances. These translate into HTTP verbs, and as you can see I only needed a single instance for the verb GET.</li><li>A <em>service function</em>; this will be the code executed when a request to our server is matched by the path we specified and the verb(s) we passed in the parameter acceptMethod. It takes as arguments a ceylon.net.http.server.Request and a ceylon.net.http.server.Response. These interfaces allow you to obtain data from and send data to the client, respectively, through HTTP-specific things such as parameters, cookies, headers, etc. I will talk about these later.</li></ul><h2 id="it%E2%80%99s-running">It&#x2019;s Running!</h2><p>Now, if you execute the code (I am using Ceylon IDE to execute the run function) you will have a running HTTP server that will listen to requests to <a href="http://localhost:8080/?ref=blog.eldermael.io" rel="noopener ugc nofollow">http://localhost:8080/</a>. That was easy!</p><p>Now, talking about the language itself, I want you to take note of the named parameters that we are using all over the place (endpoints, path, acceptMethod, service). 
They make our code very explicit and readable without the caveats of other languages, like passing objects/hashes of options that will not be checked by the compiler (here they are checked!).</p><p>Also note the endpoints parameter in the function newServer: it is an iterable. This means that if you add a comma after the Endpoint instance that we are creating, you can add more Endpoints.</p><h2 id="what-i-will-do-next">What I will do next</h2><p>Currently I am working on making this server respond with files; afterwards I really need to start working on responding with JSON, and finally I will try to use Ceylon in the browser, putting it all together (remember, Ceylon does transpile to JavaScript). I hope you can read the coming posts about this.</p><h2 id="notes-for-the-experienced-java-programmer">Notes for the experienced Java programmer</h2><p>Yes, I can almost see the grin on your face while reading the code. If you are like me and you have been using Java and the Servlet API for a long time, I think you get the idea behind the ceylon.net.http.server API. And if you compare this code, you can see that it allows you to define things very quickly and concisely with no unneeded verbosity. This somehow reminds me of Spring Boot, but here you use fewer annotations and more code.</p>]]></content:encoded></item></channel></rss>