<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Emad Alashi</title>
	<atom:link href="https://www.emadashi.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.emadashi.com/</link>
	<description></description>
	<lastBuildDate>Mon, 21 Apr 2025 09:21:30 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.4.8</generator>
	<item>
		<title>Simulate Azure IoT Telemetry And Device Twin using MQTT Directly</title>
		<link>https://www.emadashi.com/2025/04/simulate-azure-iot-telemetry-and-device-twin-using-mqtt-directly/</link>
					<comments>https://www.emadashi.com/2025/04/simulate-azure-iot-telemetry-and-device-twin-using-mqtt-directly/#respond</comments>
		
		<dc:creator><![CDATA[Emad Alashi]]></dc:creator>
		<pubDate>Mon, 21 Apr 2025 09:21:28 +0000</pubDate>
				<category><![CDATA[Misc]]></category>
		<guid isPermaLink="false">https://www.emadashi.com/?p=836</guid>

					<description><![CDATA[<p>TLDR; In this post I show how to simulate Azure IoT device D2C (Device to cloud) messages telemetry, and Device Twin Reported Properties using MQTTX CLI. Introduction I have been working in several Azure IoT projects in the past 3 years, where MQTT was the preferred device/server communication protocol. In these projects, there has been… <span class="read-more"><a href="https://www.emadashi.com/2025/04/simulate-azure-iot-telemetry-and-device-twin-using-mqtt-directly/">Read More &#187;</a></span></p>
<p>The post <a href="https://www.emadashi.com/2025/04/simulate-azure-iot-telemetry-and-device-twin-using-mqtt-directly/">Simulate Azure IoT Telemetry And Device Twin using MQTT Directly</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p><strong>TLDR</strong>;</p>



<p>In this post I show how to simulate Azure IoT device-to-cloud (D2C) telemetry messages and Device Twin Reported Properties using MQTTX CLI.</p>



<h2 class="wp-block-heading">Introduction</h2>



<p>I have been working on several Azure IoT projects over the past three years, where MQTT was the preferred device/server communication protocol. In these projects, there has been a recurring need to generate MQTT messages to simulate devices&#8217; telemetry without going through the Azure CLI.</p>



<p>One of the tools that I found recently was <a href="https://mqttx.app/docs/cli">MQTTX CLI</a>, a lightweight client that can subscribe and publish to an MQTT broker, and that has solid, dynamic JavaScript-based templating functionality that makes generating telemetry easy.<br>Quoting from their website:</p>



<p>&#8220;<em><a class="" href="https://mqttx.app/zh/cli">MQTTX CLI</a> is an open source MQTT 5.0 CLI Client and MQTTX on the command line. Designed to help develop and debug MQTT services and applications faster without the need to use a graphical interface.</em>&#8220;</p>



<p>Let&#8217;s have a look at the Simulate feature.</p>



<h2 class="wp-block-heading">MQTTX CLI Simulate Feature</h2>



<p>The <a href="https://mqttx.app/docs/cli/get-started#simulate">Simulate feature</a> can publish messages with dynamic content generated from your template file. The feature uses the <code><strong>simulate</strong></code> command. Let&#8217;s examine its parameters.</p>



<p><strong><em>Note: behind the scenes, the Simulate command depends on the Publish command, so make sure to check the necessary Publish parameters, e.g. hostname, username, password, etc.</em></strong></p>



<p>In addition to the basic parameters needed to publish messages, the <code>simulate</code> command has the following parameters to control the simulation:</p>



<ul>
<li><strong>--count</strong>: the number of MQTT connections that will be created in the process.</li>



<li><strong>--interval-message</strong>: the interval between published messages (default: 1000ms)</li>



<li><strong>--limit</strong>: the number of messages to publish; 0 means unlimited (default: 0)</li>



<li><strong>--client-id</strong>: the client ID; supports the %i (index) variable. Check the <code>--<strong>count</strong></code> parameter above to see how it is used.</li>



<li><strong>--topic</strong>: the message topic. If your topic depends on the username, client ID, or index of the simulated connection, you can use the %u (username), %c (client id), and %i (index) variables in the value.</li>



<li><strong>--file</strong>: the JavaScript module file that will be used to generate the payload. This file has a specific contract that you should adhere to so that MQTTX CLI can use it to generate the payload.</li>
</ul>



<p><strong>Note: all connections will be established with the same client ID unless you pass &#8220;%i&#8221;; if you do, each connection will get its own client ID generated from the count, e.g. clientid_1, clientid_2, etc</strong>.</p>



<p>Let&#8217;s have a look at the example template file in the docs. This simple simulator generates a payload with &#8220;temp&#8221; and &#8220;hum&#8221; properties (to use it, pass the name of the file to the <code>--file</code> parameter).</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: jscript; title: ; notranslate">
/**
 * MQTTX Scenario file example
 *
 * This script generates random temperature and humidity data.
 */
function generator(faker, options) {
  return {
    // If no topic is returned, use the topic in the command line parameters.
    // Topic format: &#039;mqttx/simulate/myScenario/&#039; + clientId,
    message: JSON.stringify({
      temp: faker.number.int({ min: 20, max: 80 }), // Generate a random temperature between 20 and 80.
      hum: faker.number.int({ min: 40, max: 90 }), // Generate a random humidity between 40 and 90.
    })
  }
}
// Export the scenario module
module.exports = {
  name: &#039;myScenario&#039;, // Name of the scenario
  generator, // Generator function
}
</pre></div>


<p>You can see that the <code><strong>generator</strong></code> function is the contract that the tool uses to generate the simulated payload. It takes two parameters: <code><strong>faker</strong></code>, which you can use to generate random and fake values, and <code><strong>options</strong></code>, if you want to use any of the options passed to the command.</p>



<p>All you need to do in your own file is implement the generator function with your custom logic to generate the payload. For more inspiration, you can look at some of the <a href="https://github.com/emqx/MQTTX/tree/main/cli/src/scenarios">built-in simulation files</a>.</p>



<h3 class="wp-block-heading">Simulate Azure D2C Message</h3>



<p>Now comes the most important part: sending simulated telemetry to Azure IoT Hub. Microsoft has already provided <a href="https://github.com/Azure-Samples/IoTMQTTSample/tree/master/mosquitto_pub">some code samples that use MQTT</a> directly to send messages. The highlight is that we need to send messages to the topic <code><strong>devices/{device_id}/messages/events/</strong></code>.</p>



<p>We need to send a message with the following parameters (<strong>make sure to replace the placeholder tokens with yours</strong>):</p>



<ul>
<li><strong>Host</strong>: &#8220;{iothub_name}.azure-devices.net&#8221;</li>



<li><strong>Port</strong>: 8883</li>



<li><strong>Client Id</strong>: &#8220;{device_id}&#8221;</li>



<li><strong>User</strong>: &#8220;{iothub_name}.azure-devices.net/{device_id}/?api-version=2018-06-30&#8221;</li>



<li><strong>Password:</strong> &#8220;{sas_token}&#8221; (this is generated by <a href="https://learn.microsoft.com/en-us/cli/azure/iot/hub?view=azure-cli-latest#az-iot-hub-generate-sas-token">this command</a>, choose a duration value that suits the lifetime of your test).</li>



<li><strong>Topic</strong>: &#8220;devices/{device_id}/messages/events/&#8221;</li>



<li><strong>Message</strong>: The payload of your telemetry</li>



<li><strong>CA certificate</strong>: if you want to validate the IoT Hub&#8217;s certificate, then you can provide the certificate that can be <a href="https://github.com/Azure-Samples/IoTMQTTSample/blob/master/IoTHubRootCA.crt.pem">found here</a>. Otherwise you can pass the <code><strong>--insecure</strong></code> parameter to ignore validating the certificate.</li>



<li><strong>MQTT Version</strong>: this is necessary with MQTTX CLI because its default protocol version is 5.0, while Azure IoT Hub uses 3.1.1.</li>
</ul>
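<p>To sanity-check these values before running anything, the username and topic strings can be assembled from the hub and device names. A minimal sketch in plain shell, assuming the hypothetical names "youriothub" and "emad" used in this post; the commented-out <code>az</code> command is the SAS token generator linked above:</p>

```shell
# Hypothetical hub and device names; replace with your own.
HUB_HOST="youriothub.azure-devices.net"
DEVICE_ID="emad"

# Username and topic formats expected by IoT Hub's MQTT endpoint:
MQTT_USER="${HUB_HOST}/${DEVICE_ID}/?api-version=2018-06-30"
D2C_TOPIC="devices/${DEVICE_ID}/messages/events/"

echo "$MQTT_USER"   # youriothub.azure-devices.net/emad/?api-version=2018-06-30
echo "$D2C_TOPIC"   # devices/emad/messages/events/

# The SAS token (used as the MQTT password) comes from the Azure CLI command
# linked above, e.g.:
#   az iot hub generate-sas-token -n youriothub -d emad --duration 3600
```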



<p>Assuming we have a device called &#8220;emad&#8221;, an Azure IoT Hub called &#8220;youriothub&#8221;, and a template file called &#8220;simulatedTelemetry.js&#8221;, the final command should look like this:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
mqttx simulate \
  --file simulatedTelemetry.js \
  -c 10 \
  --interval-message 1000 \
  --insecure \
  -q 1 \
  -V 3.1.1 \
  -h &quot;youriothub.azure-devices.net&quot; \
  -t &quot;devices/emad/messages/events/&quot; \
  --client-id &quot;emad&quot; \
  -u &#039;youriothub.azure-devices.net/emad/?api-version=2018-06-30&#039; \
  -P &#039;long-sharedaccesstoken&#039; \
  -l mqtts
</pre></div>


<h3 class="wp-block-heading">Simulate Azure IoT Device Twin Reported Properties</h3>



<p>We can also use the same technique above for the Device Twin Reported Properties. The only difference will be the <code>topic</code> value, which should be <code><strong>$iothub/twin/PATCH/properties/reported/</strong></code> (in a shell, quote or escape the <code>$</code> so it is not expanded).</p>
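<p>One practical gotcha: the twin topic starts with a <code>$</code>, which most shells will try to expand. A small sketch of the quoting (the rest of the simulate command stays the same as the D2C example above, with only the topic swapped):</p>

```shell
# Single quotes keep the '$' literal; in double quotes the shell would
# expand $iothub as an (empty) variable.
TWIN_TOPIC='$iothub/twin/PATCH/properties/reported/'
echo "$TWIN_TOPIC"   # $iothub/twin/PATCH/properties/reported/

# Then reuse the earlier simulate command with: -t "$TWIN_TOPIC"
```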



<p>If you don&#8217;t want to use simulation to send the Reported Properties, and would rather send them once with a specific payload, you can use the <code><strong>pub</strong></code> command with a simple JSON payload, like the following:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
mqttx pub \
  --file-read devicePropertyPayload.json \
  --insecure \
  -q 1 \
  -V 3.1.1 \
  -h &quot;youriothub.azure-devices.net&quot; \
  -t &#039;$iothub/twin/PATCH/properties/reported/&#039; \
  --client-id &quot;emad&quot; \
  -u &#039;youriothub.azure-devices.net/emad/?api-version=2018-06-30&#039; \
  -P &#039;long-sharedaccesstoken&#039; \
  -l mqtts
</pre></div>


<h2 class="wp-block-heading">Conclusion</h2>



<p>With this, I hope you can use MQTTX CLI to simulate device telemetry and Device Twin Reported Properties.</p>
<p>The post <a href="https://www.emadashi.com/2025/04/simulate-azure-iot-telemetry-and-device-twin-using-mqtt-directly/">Simulate Azure IoT Telemetry And Device Twin using MQTT Directly</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.emadashi.com/2025/04/simulate-azure-iot-telemetry-and-device-twin-using-mqtt-directly/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>M5 Atom Lite, Home Assistant, ESPHome, and Capacitive Soil Sensor</title>
		<link>https://www.emadashi.com/2021/01/m5-atom-lite-home-assistant-esphome-and-capacitive-soil-sensor/</link>
					<comments>https://www.emadashi.com/2021/01/m5-atom-lite-home-assistant-esphome-and-capacitive-soil-sensor/#comments</comments>
		
		<dc:creator><![CDATA[Emad Alashi]]></dc:creator>
		<pubDate>Sun, 31 Jan 2021 01:24:01 +0000</pubDate>
				<category><![CDATA[Misc]]></category>
		<guid isPermaLink="false">http://www.emadashi.com/?p=788</guid>

					<description><![CDATA[<p>Summary Update: Tatham had a great thread of tweets about this posts where he filled several gaps in this post, including the need of the MQTT server. Check it out here: https://twitter.com/TathamOddie/status/1357904732027637760?s=20 In this post, I describe what I did to set up an M5 Atom Lite with a capacitive soil moisture sensor, configure it… <span class="read-more"><a href="https://www.emadashi.com/2021/01/m5-atom-lite-home-assistant-esphome-and-capacitive-soil-sensor/">Read More &#187;</a></span></p>
<p>The post <a href="https://www.emadashi.com/2021/01/m5-atom-lite-home-assistant-esphome-and-capacitive-soil-sensor/">M5 Atom Lite, Home Assistant, ESPHome, and Capacitive Soil Sensor</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2><span data-preserver-spaces="true">Summary</span></h2>
<p><strong>Update</strong>: <em>Tatham had a great thread of tweets about this post, in which he filled several gaps, including the need for an MQTT server. Check it out here: <a href="https://twitter.com/TathamOddie/status/1357904732027637760?s=20">https://twitter.com/TathamOddie/status/1357904732027637760?s=20</a></em></p>
<p><span data-preserver-spaces="true">In this post, I describe what I did to set up an <a href="https://docs.m5stack.com/#/en/core/atom_lite">M5 Atom Lite</a> with a <a href="https://duckduckgo.com/?t=ffab&amp;q=capactive+soil+moisture+sensor&amp;atb=v188-1&amp;ia=web">capacitive soil moisture sensor</a>, configure it with <a href="http://esphome.io/">ESPHome</a> and connect it to <a href="https://www.home-assistant.io/">Home Assistant</a>. The Home Assistant is deployed in a Docker container on my <a href="https://www.synology.com/en-us/products/DS220+">Synology NAS (DS220+)</a>, and my dev machine is a MacbookPro.</span></p>
<p><strong><span data-preserver-spaces="true">Important</span></strong><span data-preserver-spaces="true">: In this post, I rely on </span><a class="editor-rtfLink" href="https://www.youtube.com/watch?feature=youtu.be&amp;v=lc5cv6X7bXk&amp;app=desktop" target="_blank" rel="noopener"><span data-preserver-spaces="true">Tatham Oddie&#8217;s invaluable video</span></a><span data-preserver-spaces="true">, which is a comprehensive guide and introduction to HomeAssistant, ESPHome, and the M5 Atom Lite.</span></p>
<p><span data-preserver-spaces="true">So I highly encourage you to check that video before you continue.</span></p>
<p><strong>Disclaimer:</strong> I am no expert in IoT or electronics, so please keep that in mind while reading this post!</p>
<p><iframe src="https://www.youtube.com/embed/fdt5dMDmiHE" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>&nbsp;</p>
<h2><span data-preserver-spaces="true">Terminology and References</span></h2>
<p><span data-preserver-spaces="true">To understand the rest of the article, this is a quick run-through of the main terms:</span></p>
<ul>
<li><a class="editor-rtfLink" href="https://docs.m5stack.com/#/en/core/atom_lite" target="_blank" rel="noopener"><span data-preserver-spaces="true">M5 Atom Lite</span></a><span data-preserver-spaces="true">: an ESP32 device that is packaged nicely. You can program it with Arduino Framework (C++), or MicroPython.</span></li>
<li><a class="editor-rtfLink" href="https://duckduckgo.com/?t=ffab&amp;q=capactive+soil+moisture+sensor&amp;atb=v188-1&amp;ia=web" target="_blank" rel="noopener"><span data-preserver-spaces="true">Capacitive Soil Moisture Sensor</span></a><span data-preserver-spaces="true">: measures the moisture of the soil and produces the readings as an analog signal.</span></li>
<li><a class="editor-rtfLink" href="https://www.home-assistant.io/" target="_blank" rel="noopener"><span data-preserver-spaces="true">Home Assistant</span></a><span data-preserver-spaces="true">: A home automation server/tool. Connects to all the home-assistant-ready devices and presents a web dashboard where you can read and control these devices.</span></li>
<li><a class="editor-rtfLink" href="https://esphome.io/" target="_blank" rel="noopener"><span data-preserver-spaces="true">ESPHome</span></a><span data-preserver-spaces="true">: a project through which you can program your ESP device and make it home-assistant-ready.</span></li>
</ul>
<h2><span data-preserver-spaces="true">Setting up Home Assistant with Docker</span></h2>
<p><span data-preserver-spaces="true">My </span><span data-preserver-spaces="true">Synology NAS DS220+</span><span data-preserver-spaces="true"> was a perfect candidate to host the Home Assistant server; </span><a class="editor-rtfLink" href="https://mariushosting.com/how-to-install-home-assistant-on-your-synology-nas/" target="_blank" rel="noopener"><span data-preserver-spaces="true">this is a guide</span></a><span data-preserver-spaces="true"> that will show you how to set it up with <a href="https://docker.com">Docker</a>.<br />
</span></p>
<p><span data-preserver-spaces="true">However, there is a small problem: if you watched Tatham&#8217;s video above, you will have seen that he added ESPHome to Home Assistant as an add-on. The problem is that the Supervisor menu item, through which you can add add-ons to Home Assistant, doesn&#8217;t exist in the Docker image of Home Assistant.</span></p>
<p><span data-preserver-spaces="true">To solve this problem, I had to run an independent ESPHome instance in another Docker container to program my device.</span></p>
<p><span data-preserver-spaces="true">To do that, I took the same steps in the guide I mentioned above for Home Assistant. The only difference is that:</span></p>
<ol>
<li>I used the image &#8220;<a href="https://hub.docker.com/r/esphome/esphome"><span data-preserver-spaces="true">esphome/esphome:latest</span></a>&#8221;, of course!</li>
<li>I mounted a different folder under &#8220;docker&#8221;, which I called &#8220;esphome&#8221;</li>
<li><span data-preserver-spaces="true">And I exposed port 6052</span></li>
</ol>
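<p>The three tweaks above amount to a <code>docker run</code> along these lines. This is a sketch, not the exact steps from the guide: the volume path is a typical Synology path that you should adjust, and the command is assembled and printed so you can review it before running it on the NAS:</p>

```shell
ESPHOME_IMAGE="esphome/esphome:latest"            # the image from step 1
CONFIG_VOLUME="/volume1/docker/esphome:/config"   # step 2: ESPHome keeps node configs in /config
DASHBOARD_PORT="6052:6052"                        # step 3: the dashboard listens on 6052

# Assemble and print the command; run it on the NAS when it looks right.
CMD="docker run -d --name esphome -p $DASHBOARD_PORT -v $CONFIG_VOLUME $ESPHOME_IMAGE"
echo "$CMD"
```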
<p><span data-preserver-spaces="true">Now you have an ESPHome instance running, and can create the first Node and application on the device.</span></p>
<h2><span data-preserver-spaces="true">Creating the Node on ESPHome</span></h2>
<p><span data-preserver-spaces="true">To access the ESPHome dashboard, I navigate to http://[nas-ip-address]:6052. And to make things easier for me, and for my password manager, I use the awesome service </span><a class="editor-rtfLink" href="https://nip.io/" target="_blank" rel="noopener"><span data-preserver-spaces="true">https://nip.io</span></a><span data-preserver-spaces="true"> to give this address a proper name like https://esphome-[nas-ip-address]:6052.</span></p>
<p><span data-preserver-spaces="true">In the ESPHome dashboard, I follow the wizard to create my Node, which is a representation of my device. In there,  I can program my device using YAML, which will eventually generate C++ code.<br />
</span></p>
<p><span data-preserver-spaces="true">The ESPHome editor already comes with a linter, so your mistakes will be corrected for you in the browser. However, if you want access to the generated C++ code, or to examine the YAML file in VSCode, you can navigate to the code in the docker folder on your NAS.</span></p>
<p><strong><span data-preserver-spaces="true">Note</span></strong><span data-preserver-spaces="true">: <em>When you install Docker on your NAS, the &#8220;docker&#8221; folder will not be visible on the network. So you have to untick the box &#8216;Hide this shared folder in &#8220;My Network Places&#8221;&#8216;.</em></span></p>
<h2><span data-preserver-spaces="true">Generating the code</span></h2>
<p><span data-preserver-spaces="true">I started with a very basic YAML file to control the LED, then hit the Compile menu item in the node to generate the C++ code that will be deployed to my device. Now I am ready to install the application on my device.</span></p>
<pre class="brush: yaml; title: ; notranslate">
esphome:
  name: m5atomlite2
  platform: ESP32
  board: m5stack-core-esp32

wifi:
  ssid: &quot;wifi-ssid&quot;
  password: &quot;wifi-password&quot;

  # Enable fallback hotspot (captive portal) in case wifi connection fails
  ap:
    ssid: &quot;M5Atomlite2 Fallback Hotspot&quot;
    password: &quot;0yyQtzyIpw3z&quot;

captive_portal:

# Enable logging
logger:

# Enable Home Assistant API
api:
  password: &quot;123&quot;

ota:
  password: &quot;123&quot;

light:
  - platform: fastled_clockless
    chipset: SK6812
    pin: 27
    num_leds: 1
    rgb_order: GRB
    name: &quot;FastLED Light&quot;


</pre>
<p><strong>Note</strong><span data-preserver-spaces="true">:<em> Make sure to use the right password for your wifi, and</em> <em>make sure you pick the right configuration for your LED: the pin number on your board and the chipset of the LED you have.<br />
</em></span></p>
<p><span data-preserver-spaces="true">When you create the Node in the ESPHome dashboard and compile your YAML file, a folder with the same name will be created in the docker/esphome folder on your NAS (that is if you have followed the steps above, otherwise use the name of the folder you mapped).</span></p>
<h2><span data-preserver-spaces="true">Flashing the device for the first time</span></h2>
<p><strong>Update</strong>: <em>Eranjo mentioned in the comments that the latest flasher version might not work, so you might need a previous version. Check his comment below for more details.</em></p>
<p><span data-preserver-spaces="true">According to Tatham&#8217;s video, I needed to use something like <a href="https://github.com/esphome/ESPHome-Flasher">esphome-flasher</a>, but there was no version for macOS, so I needed to find an alternative.</span></p>
<p><span data-preserver-spaces="true">Tatham mentioned that there are many ways to flash an ESP device, but I consulted with my friend </span><a class="editor-rtfLink" href="https://twitter.com/amal_abey" target="_blank" rel="noopener"><span data-preserver-spaces="true">@Amal Abeygunawardana</span></a><span data-preserver-spaces="true"> and found his suggestion interesting! Use the </span><a class="editor-rtfLink" href="https://marketplace.visualstudio.com/items?itemName=platformio.platformio-ide" target="_blank" rel="noopener"><span data-preserver-spaces="true">PlatformIO IDE</span></a><span data-preserver-spaces="true"> on VSCode. This also gave me the opportunity to learn about programming my device without ESPHome, using the Arduino Framework (another story).</span></p>
<p><span data-preserver-spaces="true">Once I opened the folder using VSCode, the PlatformIO IDE extension discovered that this is a folder it understands, and it launched the PlatformIO IDE hello page.</span></p>
<p><span data-preserver-spaces="true">I made sure my device was connected to my computer through USB and hit the PlatformIO: Upload command in VSCode. The command compiled and uploaded my firmware binary to the device.</span></p>
<h2><span data-preserver-spaces="true">Updating the device over the wifi</span></h2>
<p><span data-preserver-spaces="true">Now that my device is set up for the first time, I can use ESPHome to upload to the device over the wifi.</span></p>
<p><span data-preserver-spaces="true">However, in my case, I had a problem: ESPHome could not find the device over the network using its name, devicename.local. When I checked the devices connected to my network, I could see my device and I could ping it! But ESPHome was still blind to it.</span></p>
<p><span data-preserver-spaces="true">As a workaround, I had to use the property <em>use_address</em> to give an explicit IP address to the node. I gave it the same IP address the DHCP already has given it before, so the wifi section of the YAML file became like this:</span></p>
<pre class="brush: yaml; title: ; notranslate">
wifi:
  ssid: &quot;wifi-ssid&quot;
  password: &quot;wifi-password&quot;
  use_address: 192.168.0.21
</pre>
<p><span data-preserver-spaces="true">After doing that I managed to upload new changes over the wifi. (Please if you know a better solution let me know :)).</span></p>
<h2><span data-preserver-spaces="true">Adding the device to Home Assistant</span></h2>
<p><span data-preserver-spaces="true">In the Home Assistant dashboard, I navigated to the Configuration menu item on the left, hit Integrations, and then at the bottom right corner hit ADD INTEGRATION. Once presented with a dialog, I searched for ESPHome.</span></p>
<p><img fetchpriority="high" decoding="async" class=" wp-image-798 alignnone" src="https://www.emadashi.com/wp-content/uploads/2021/01/esphome-integration.png" alt="ESPHome Integration" width="606" height="328" srcset="https://www.emadashi.com/wp-content/uploads/2021/01/esphome-integration.png 832w, https://www.emadashi.com/wp-content/uploads/2021/01/esphome-integration-300x162.png 300w, https://www.emadashi.com/wp-content/uploads/2021/01/esphome-integration-768x415.png 768w, https://www.emadashi.com/wp-content/uploads/2021/01/esphome-integration-660x357.png 660w" sizes="(max-width: 606px) 100vw, 606px" /></p>
<p><span data-preserver-spaces="true">I entered the IP address of the device and magic happened! Under Devices I could see my device, and when I navigated to the details I saw the LED control Entity.</span></p>
<p><img decoding="async" class="size-large wp-image-799 alignnone" src="https://www.emadashi.com/wp-content/uploads/2021/01/led-control-1024x409.png" alt="LED Control Display" width="665" height="266" srcset="https://www.emadashi.com/wp-content/uploads/2021/01/led-control-1024x409.png 1024w, https://www.emadashi.com/wp-content/uploads/2021/01/led-control-300x120.png 300w, https://www.emadashi.com/wp-content/uploads/2021/01/led-control-768x306.png 768w, https://www.emadashi.com/wp-content/uploads/2021/01/led-control-660x263.png 660w, https://www.emadashi.com/wp-content/uploads/2021/01/led-control.png 1118w" sizes="(max-width: 665px) 100vw, 665px" /></p>
<h2><span data-preserver-spaces="true">Connecting the moisture sensor</span></h2>
<p><span data-preserver-spaces="true">Ok great, so far so good, but we should not forget what we are here for: a moisture sensor!</span></p>
<p><span data-preserver-spaces="true">I followed </span><a class="editor-rtfLink" href="https://www.youtube.com/watch?v=pFQaFnqpOtQ" target="_blank" rel="noopener"><span data-preserver-spaces="true">this video</span></a><span data-preserver-spaces="true">, but since my device is not an Arduino, I had to figure out which pin I should use, and it was pin 33. Thanks to the M5 Atom Lite&#8217;s form factor, I only needed a jumper wire.</span></p>
<p><img loading="lazy" decoding="async" class=" wp-image-802 alignnone" src="https://www.emadashi.com/wp-content/uploads/2021/01/soil-sensor-with-m5-atom-768x1024.jpg" alt="Soil Sensor with M5 Atom Lite" width="418" height="558" srcset="https://www.emadashi.com/wp-content/uploads/2021/01/soil-sensor-with-m5-atom-768x1024.jpg 768w, https://www.emadashi.com/wp-content/uploads/2021/01/soil-sensor-with-m5-atom-225x300.jpg 225w, https://www.emadashi.com/wp-content/uploads/2021/01/soil-sensor-with-m5-atom-660x880.jpg 660w, https://www.emadashi.com/wp-content/uploads/2021/01/soil-sensor-with-m5-atom.jpg 1080w" sizes="(max-width: 418px) 100vw, 418px" /></p>
<p>&nbsp;</p>
<h2><span data-preserver-spaces="true">Programming for the moisture sensor</span></h2>
<p><span data-preserver-spaces="true">Now, all I have to do is figure out how to configure my YAML file to add an Entity that reads data from the moisture sensor. When I searched the ESPHome docs, I couldn&#8217;t find a straightforward way to do that, but I stumbled upon the </span><a class="editor-rtfLink" href="https://esphome.io/components/sensor/adc.html?highlight=adc" target="_blank" rel="noopener"><span data-preserver-spaces="true">Analog to Digital Sensor</span></a><span data-preserver-spaces="true">, and it appeared to be the answer.</span></p>
<p><span data-preserver-spaces="true">So I added the following segment to the YAML file (Valeria is the name of our plant :D):</span></p>
<pre class="brush: yaml; title: ; notranslate">
sensor:
  - platform: adc
    pin: 33
    name: &quot;Valeria&quot;
    update_interval: 500ms
    attenuation: 11db
    filters:
</pre>
<p><span data-preserver-spaces="true">Of course, the update interval is excessive, but it is good for debugging purposes when you dip the sensor in a cup of water.</span></p>
<p><strong><span data-preserver-spaces="true">Important Note</span></strong><span data-preserver-spaces="true">: depending on the voltage of the sensor, you need to tune the attenuation property; the default is 0db, and I had to change it to 11db. Read the documentation of the Analog to Digital Sensor above for more information.</span></p>
<p><span data-preserver-spaces="true">In ESPHome, I compiled and uploaded the new code, and managed to see the voltage readings next to the LED Entity. Success! However, these were raw readings, and I needed a percentage. I found this </span><a class="editor-rtfLink" href="https://www.reddit.com/r/homeassistant/comments/bqw3dw/esphome_and_home_assistant_soil_capacitance/" target="_blank" rel="noopener"><span data-preserver-spaces="true">post on Reddit</span></a><span data-preserver-spaces="true"> while trying to figure out the Entity, and they had already solved it for me :).<br />
</span></p>
<p><span data-preserver-spaces="true">So the code below uses the filters attribute: it takes the raw value of the reading and passes it to a lambda that returns the result accordingly:</span></p>
<pre class="brush: yaml; title: ; notranslate">
sensor:
  - platform: adc
    pin: 33
    name: &quot;Valeria&quot;
    update_interval: 500ms
    attenuation: 11db
    filters:
    - lambda: |-
          if (x &gt; 3.74) {
            return 0;
          } else if (x &lt; 1.53) {
            return 100;
          } else {
            return (3.74-x) / (3.74-2.85) * 100.0;
          }

</pre>
<p><span data-preserver-spaces="true">Of course, you have to find your lowest and highest raw readings to get the right formula for your sensor. In this case, the highest was 3.74 and the lowest was 1.53. (<strong>Update</strong>: these are not really accurate values, which explains why I get more than 100 in my video above. I also didn&#8217;t remove the label &#8220;v&#8221;, embarrassing!)<br />
</span></p>
<h2><span data-preserver-spaces="true">Too much power consumption, let&#8217;s use Deep Sleep</span></h2>
<p><span data-preserver-spaces="true">The M5 Atom Lite is a small microcontroller, but that doesn&#8217;t mean it consumes little power. Placed in a plant pot and powered by a battery, it will not last long.<br />
</span></p>
<p><span data-preserver-spaces="true">The good thing is that we can use deep sleep mode: once the device is put into deep sleep, power consumption drops and the battery lasts longer, depending on how long you keep the device asleep. For more information about ESP deep sleep, check the </span><a class="editor-rtfLink" href="https://randomnerdtutorials.com/esp8266-deep-sleep-with-arduino-ide/" target="_blank" rel="noopener"><span data-preserver-spaces="true">following article</span></a><span data-preserver-spaces="true">.</span></p>
<p><span data-preserver-spaces="true">To put the device in deep sleep using ESPHome, we will update our YAML to include the Deep Sleep component:</span></p>
<pre class="brush: yaml; title: ; notranslate">
deep_sleep:
  id: deep_sleep_1
  run_duration: 10s
  sleep_duration: 2min
</pre>
<p><span data-preserver-spaces="true">This will put the device into deep sleep mode for 2 minutes, then wake it up for 10 seconds to allow the other components to do their job, and then put it back to sleep for another 2 minutes.</span></p>
<h2><span data-preserver-spaces="true">But Deep Sleep has a problem&#8230;</span></h2>
<p><span data-preserver-spaces="true">There is a small problem, though, when you put the device in deep sleep mode: the device shuts down a lot of its capabilities, including the CPU and Wi-Fi.</span></p>
<p><span data-preserver-spaces="true">This means the device will be unreachable for two minutes at a time and awake for only 10 seconds. So if we want to update the firmware over the air, this will be challenging.</span></p>
<p><span data-preserver-spaces="true">So how can we solve this problem? Well, if we can prevent deep sleep as the FIRST thing the device does when it wakes up, then we have time to update its firmware. Once the firmware is updated, we re-enable deep sleep (not my genius idea; this is common practice :))<br />
</span></p>
<p><span data-preserver-spaces="true">How to achieve this, I hear you say? The answer is MQTT. MQTT is a lightweight protocol for transmitting messages between devices. Its biggest advantage here is the retained message concept: a client can publish a message to the broker (server) with the retained flag set, and the broker will keep delivering that message to anyone who subscribes to the topic, until another client (or the same client) publishes a new value to overwrite it. (also not my idea :D)<br />
</span></p>
<p><span data-preserver-spaces="true">So if we publish a retained message to the broker with a value that means &#8220;turn off deep sleep&#8221;, and we configure the device to read from this broker as the first thing it does when it wakes up, then we can achieve our goal!</span></p>
<p><span data-preserver-spaces="true">Luckily, this is easy with ESPHome; we just need to update our YAML file to use the MQTT component (God, I love ESPHome!):</span></p>
<pre class="brush: yaml; title: ; notranslate">
mqtt:
  broker: 192.168.0.231
  port: 1883
  on_message:
    - topic: ota_mode
      payload: 'ON'
      then:
        - deep_sleep.prevent: deep_sleep_1
    - topic: sleep_mode
      payload: 'ON'
      then:
        - deep_sleep.enter: deep_sleep_1
</pre>
<p><span data-preserver-spaces="true">The above segment programs the device to connect to the broker at 192.168.0.231 and listen on the topic &#8220;ota_mode&#8221; (the name of the topic can be anything you want). When a message arrives there, we check the payload: if it equals ON, we prevent the deep sleep component we configured above from sleeping. However, if a message with payload ON arrives on the topic sleep_mode, the device goes back into deep sleep.</span></p>
<p><span data-preserver-spaces="true">Ideally, you don&#8217;t want two topics representing one state, but let&#8217;s just go with this flow for now. Check </span><a class="editor-rtfLink" href="https://esphome.io/components/mqtt.html?highlight=mqtt" target="_blank" rel="noopener"><span data-preserver-spaces="true">the documentation of the MQTT Component</span></a><span data-preserver-spaces="true"> to see how you can use Lambdas for tighter control (not AWS Lambda!).</span></p>
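<p><span data-preserver-spaces="true">For example, a single topic could carry both states if we branch on the payload inside a lambda. This is only a sketch of the idea, not something I ran in this project: the topic name &#8220;power_mode&#8221; is made up, and it assumes the deep sleep component exposes prevent_deep_sleep() and begin_sleep() in its C++ API, with the payload available as x in the lambda:</span></p>
<pre class="brush: yaml; title: ; notranslate">
mqtt:
  broker: 192.168.0.231
  port: 1883
  on_message:
    - topic: power_mode      # one hypothetical topic for both states
      then:
        - lambda: |-
            // x holds the message payload as a string
            if (x == "OTA") {
              id(deep_sleep_1).prevent_deep_sleep();
            } else {
              id(deep_sleep_1).begin_sleep();
            }
</pre>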
<h2><span data-preserver-spaces="true">Oops, but we don&#8217;t have an MQTT server!</span></h2>
<p><span data-preserver-spaces="true">Did I mention I love Docker? We run an MQTT broker in a container on the NAS, just like we did for Home Assistant and ESPHome above. For that, I chose the </span><a class="editor-rtfLink" href="https://mosquitto.org/" target="_blank" rel="noopener"><span data-preserver-spaces="true">Mosquitto server</span></a><span data-preserver-spaces="true">, which already has a </span><a class="editor-rtfLink" href="https://registry.hub.docker.com/_/eclipse-mosquitto/" target="_blank" rel="noopener"><span data-preserver-spaces="true">container image</span></a><span data-preserver-spaces="true">.</span></p>
<p><span data-preserver-spaces="true">The only thing I want to bring to your attention is that I mapped a file into the container at the path /mosquitto/config/mosquitto.conf to hold the configuration, with the following content:</span></p>
<pre class="brush: plain; title: ; notranslate">
allow_anonymous true

listener 1883
</pre>
<p><span data-preserver-spaces="true">If you don&#8217;t put the second line, the server will only accept messages from clients on the same machine. Please note as well that this is not a secure setup, so please be careful with your choices.</span></p>
<p><span data-preserver-spaces="true">Now, when I want to keep my device OUT of sleep mode, I just publish a retained message to the topic &#8220;ota_mode&#8221; with the value ON, and make sure that the topic &#8220;sleep_mode&#8221; doesn&#8217;t hold the value ON. To do that I use the MQTT client &#8220;</span><a class="editor-rtfLink" href="https://mqtt-explorer.com/" target="_blank" rel="noopener"><span data-preserver-spaces="true">MQTT Explorer</span></a><span data-preserver-spaces="true">&#8220;, but you can also run the following command on your NAS through SSH:</span></p>
<pre class="brush: plain; title: ; notranslate">
docker exec -it [nameOfMosquittoContainerOnNas] mosquitto_pub -V mqttv311 -h localhost -d -r -t ota_mode -m "ON"
</pre>
<p><span data-preserver-spaces="true">Note the -m flag that carries the payload, and the -r flag that asks the broker to retain the message so the device picks it up whenever it next wakes.</span></p>
<h2>Conclusion</h2>
<p>That was actually a lot of fun, and it&#8217;s just astonishing how good Home Assistant and ESPHome are. I am usually suspicious of the quality and efficiency of any product that generates code to achieve something, especially from a DSL-like language, but in this case things look pretty solid!</p>
<p>Let me know if you have any questions about this setup; I&#8217;d love to hear from you, and I hope this helps you in your journey.</p>
<p>The post <a href="https://www.emadashi.com/2021/01/m5-atom-lite-home-assistant-esphome-and-capacitive-soil-sensor/">M5 Atom Lite, Home Assistant, ESPHome, and Capacitive Soil Sensor</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.emadashi.com/2021/01/m5-atom-lite-home-assistant-esphome-and-capacitive-soil-sensor/feed/</wfw:commentRss>
			<slash:comments>14</slash:comments>
		
		
			</item>
		<item>
		<title>Contributing to Open Source Software with Project KEDA</title>
		<link>https://www.emadashi.com/2020/11/contributing-to-open-source-software-with-project-keda/</link>
					<comments>https://www.emadashi.com/2020/11/contributing-to-open-source-software-with-project-keda/#respond</comments>
		
		<dc:creator><![CDATA[Emad Alashi]]></dc:creator>
		<pubDate>Mon, 23 Nov 2020 12:54:23 +0000</pubDate>
				<category><![CDATA[Misc]]></category>
		<guid isPermaLink="false">http://www.emadashi.com/?p=762</guid>

					<description><![CDATA[<p>Summary This post explains my latest experience in contributing to open source software with project KEDA. Last week, the KEDA team have accepted and merged a Pull Request I created to support Pod Identity in the Event Hub scaler. @Simon Wight, an amazing community person, encouraged me to write about my experience (cheers for the nudge Simon :)), and this post is that. I will… <span class="read-more"><a href="https://www.emadashi.com/2020/11/contributing-to-open-source-software-with-project-keda/">Read More &#187;</a></span></p>
<p>The post <a href="https://www.emadashi.com/2020/11/contributing-to-open-source-software-with-project-keda/">Contributing to Open Source Software with Project KEDA</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2 id="introduction" class="code-line code-line" data-line="0">Summary</h2>
<p class="code-line code-line" data-line="2">This post explains my latest experience in contributing to open source software with project KEDA.<br />
Last week, the <a title="https://github.com/kedacore/keda" href="https://github.com/kedacore/keda" data-href="https://github.com/kedacore/keda">KEDA</a> team accepted and merged a <a title="https://github.com/kedacore/keda/pull/1305" href="https://github.com/kedacore/keda/pull/1305" data-href="https://github.com/kedacore/keda/pull/1305">Pull Request I created</a> to support <a title="https://github.com/Azure/aad-pod-identity/tree/master/charts/aad-pod-identity#configuration" href="https://github.com/Azure/aad-pod-identity/tree/master/charts/aad-pod-identity#configuration" data-href="https://github.com/Azure/aad-pod-identity/tree/master/charts/aad-pod-identity#configuration">Pod Identity</a> in the <a title="https://keda.sh/docs/2.0/scalers/azure-event-hub/" href="https://keda.sh/docs/2.0/scalers/azure-event-hub/" data-href="https://keda.sh/docs/2.0/scalers/azure-event-hub/">Event Hub</a> scaler. <a title="https://twitter.com/simonwaight" href="https://twitter.com/simonwaight" data-href="https://twitter.com/simonwaight">@Simon Waight</a>, an amazing community person, <a title="https://twitter.com/simonwaight/status/1328519554016788481?s=20" href="https://twitter.com/simonwaight/status/1328519554016788481?s=20" data-href="https://twitter.com/simonwaight/status/1328519554016788481?s=20">encouraged me</a> to write about my experience (cheers for the nudge, Simon :)), and this post is the result.</p>
<p class="code-line code-line" data-line="2">I will explain how it all started, what I did to get it done, and all the fun and challenges along the way.</p>
<h2 id="how-it-all-started" class="code-line code-line" data-line="4">How it all started</h2>
<p class="code-line code-line" data-line="6">My relationship with KEDA started in mid-2019, with what I think was a presentation by <a title="https://twitter.com/jeffhollan" href="https://twitter.com/jeffhollan" data-href="https://twitter.com/jeffhollan">Jeff Hollan</a>. At that time I was already working with Kubernetes, and I was already familiar with <a title="https://azure.microsoft.com/en-us/services/functions/" href="https://azure.microsoft.com/en-us/services/functions/" data-href="https://azure.microsoft.com/en-us/services/functions/">Azure Functions</a>. I have always been curious about scaling, whether for Azure Functions or containers, and KEDA came to complete this missing piece.</p>
<p class="code-line code-line" data-line="8">This explains the &#8220;<em>interest</em>&#8221; part of the contribution story, because no one can contribute to an open source project, and give from their own time, unless the project is of interest to them. You need this spark to keep you going, because it&#8217;s not easy, and at some point you WILL doubt yourself and say: &#8220;Oh man, why am I doing this?!&#8221;.</p>
<p class="code-line code-line" data-line="10">Later that year, I also became interested in <a title="https://www.emadashi.com/2019/08/learning-a-new-programming-language-golang-as-an-example/" href="https://www.emadashi.com/2019/08/learning-a-new-programming-language-golang-as-an-example/" data-href="https://www.emadashi.com/2019/08/learning-a-new-programming-language-golang-as-an-example/">learning Go</a>, and this brought me closer to the idea that I could contribute to a project like KEDA.</p>
<h2 id="one-step-at-a-time" class="code-line code-line" data-line="13">One Step at a Time</h2>
<p class="code-line code-line" data-line="15">The Pull Request (PR) that sparked this post was not my first contribution to KEDA, but it is definitely the biggest up until the moment of writing these words. I had to start with smaller steps; otherwise, I would have become frustrated and given up at the very beginning. So how did it start?</p>
<h2 id="understanding-the-code-and-contributing-docs" class="code-line code-line" data-line="17">Understanding The Code, and Contributing Docs</h2>
<p class="code-line code-line" data-line="19">Before I had any intention of contributing to KEDA, I was curious to understand how it works. I have always believed that in order to use a tool efficiently, you have to go one level of abstraction deeper, and if I wanted to understand how KEDA works, I needed to check out the code.</p>
<p class="code-line code-line" data-line="21">So I did that! I cloned the code and started examining it, trying to understand it. This not only gave me the advantage of understanding KEDA; it was also a very good way to learn Go itself. It&#8217;s worth mentioning here that I didn&#8217;t even try to build it; I didn&#8217;t have the intention to contribute at that moment, and I didn&#8217;t want to go through the hassle of installing the prerequisites.</p>
<p class="code-line code-line" data-line="24">After I became comfortable with how it works, I noticed gaps in the documentation in areas that are key to using KEDA to its full potential. So I decided to contribute to the documentation: I created <a title="https://github.com/kedacore/keda/pull/372" href="https://github.com/kedacore/keda/pull/372" data-href="https://github.com/kedacore/keda/pull/372">my first PR</a> to the docs, explaining how to write your own External Scaler.</p>
<p class="code-line code-line" data-line="26">This had two major benefits that helped in my future PRs:</p>
<h3 id="1-i-became-familiar-with-the-maintainers" class="code-line code-line" data-line="28">1. I became familiar with the maintainers</h3>
<p class="code-line code-line" data-line="30">I joined the <a title="https://kubernetes.slack.com/archives/CKZJ36A5D" href="https://kubernetes.slack.com/archives/CKZJ36A5D" data-href="https://kubernetes.slack.com/archives/CKZJ36A5D">KEDA Slack channel</a> and had a couple of conversations with the maintainers on what was missing and what I intended to cover. This created a connection with the maintainers and allowed them, and myself, to understand the expectations, their method, and their way of thinking. In my case the maintainers were: <a title="https://github.com/tomkerkhove" href="https://github.com/tomkerkhove" data-href="https://github.com/tomkerkhove">Tom Kerkhove</a>, <a title="https://github.com/zroubalik" href="https://github.com/zroubalik" data-href="https://github.com/zroubalik">Zbynek Roubalik</a>, and <a title="https://github.com/ahmelsayed" href="https://github.com/ahmelsayed" data-href="https://github.com/ahmelsayed">Ahmed ElSayed</a>, who are FANTASTIC people; they are helpful, encouraging, and appreciative. Kudos to them!</p>
<h3 id="2-i-know-more-about-the-code-now" class="code-line code-line" data-line="32">2. I know more about the code now</h3>
<p class="code-line code-line" data-line="34">After that, I was in a much better position to contribute code: I knew, in general, how things worked, where to find code, and how to navigate it.</p>
<p class="code-line code-line" data-line="36">This also put me in a good position to speak about it in user groups and conferences, and even give workshops.</p>
<p class="code-line code-line" data-line="38">In addition to that, I started <a title="https://www.youtube.com/watch?v=rfgzs5xnGFI&amp;list=PL77QwZkeof9kzM6uo1brnMY1XwWk9bt8U" href="https://www.youtube.com/watch?v=rfgzs5xnGFI&amp;list=PL77QwZkeof9kzM6uo1brnMY1XwWk9bt8U" data-href="https://www.youtube.com/watch?v=rfgzs5xnGFI&amp;list=PL77QwZkeof9kzM6uo1brnMY1XwWk9bt8U">writing an External Scaler</a> live on <a title="https://twitch.tv/emadashi" href="https://twitch.tv/emadashi" data-href="https://twitch.tv/emadashi">Twitch</a>. External Scalers are not part of KEDA&#8217;s binaries; they are independent deployments that use gRPC to communicate with KEDA.</p>
<p class="code-line code-line" data-line="40">All of this increased my bond with the project.</p>
<h2 id="contributing-the-first-lines-of-code" class="code-line code-line" data-line="43">Contributing The First Lines of Code</h2>
<p class="code-line code-line" data-line="45">This is where things get harder, but also more exciting!</p>
<h3 id="lets-just-build-it-for-now" class="code-line code-line" data-line="47">Let&#8217;s just build it for now!</h3>
<p class="code-line code-line" data-line="49">I had already cloned the repository and navigated it; now it was time to run it. I didn&#8217;t want to do anything except achieve a successful build and deployment; if I managed these two, I would be more than two-thirds of the way towards the first commit!</p>
<p class="code-line code-line" data-line="51">The reason is that a lot of things can go wrong, even for such a simple task. You are setting up a new environment from scratch, and making sure all the prerequisites and dependencies are installed the right way, with the right versions.</p>
<h3 id="reading-the-contribution-guide" class="code-line code-line" data-line="53">Reading the Contribution guide</h3>
<p class="code-line code-line" data-line="55">So I read the <a title="https://github.com/kedacore/keda/blob/main/CONTRIBUTING.md" href="https://github.com/kedacore/keda/blob/main/CONTRIBUTING.md" data-href="https://github.com/kedacore/keda/blob/main/CONTRIBUTING.md">contribution guide</a>; this is where you should start with any OSS project you want to contribute to. Rather than fighting your way through by &#8220;discovering&#8221; code, reading the contribution guide will show you at least the entry point. (<em>Duh! But we also don&#8217;t read manuals to set up a washing machine, do we now!</em>)</p>
<p class="code-line code-line" data-line="57">However, the documentation was not complete at that time, and I had to do some discovery in order to have a successful build and deployment, and the right place for that discovery was <strong><a title="https://github.com/kedacore/keda/blob/main/Makefile" href="https://github.com/kedacore/keda/blob/main/Makefile" data-href="https://github.com/kedacore/keda/blob/main/Makefile">the Makefile</a></strong> (or the build script in other repos).</p>
<p class="code-line code-line" data-line="59">Following the Contribution guide, and wiggling my way through reading the Makefile, I managed to have a successful build and local deployment. This was another opportunity to contribute to the documentation, and <a title="https://github.com/kedacore/keda/pull/443/files" href="https://github.com/kedacore/keda/pull/443/files" data-href="https://github.com/kedacore/keda/pull/443/files">I did</a> (along with other stuff).</p>
<h3 id="finding-the-smallest-code-contribution-possible" class="code-line code-line" data-line="61">Finding the smallest code contribution possible</h3>
<p class="code-line code-line" data-line="63">Ok, now I am ready! I want to find the smallest contribution that doesn&#8217;t require a lot of effort or knowledge. The easiest way to find that out is to <strong>search for bugs</strong> in the repo&#8217;s Issues.</p>
<p class="code-line code-line" data-line="65">Why a bug? Because solving bugs is like solving a jigsaw puzzle that is already very close to being completed; most of the pieces are already there, and you just need to fit the last couple. Also, the expectation is very clear; this is why maintainers classified it as a bug, so achieving the result should be straightforward.</p>
<p class="code-line code-line" data-line="67">As luck would have it, when I was trying to run my deployment locally, I was using a sample that already had a bug! I thought it was something I did, so I searched the repo&#8217;s Issues and found that someone had <a title="https://github.com/kedacore/keda/issues/319" href="https://github.com/kedacore/keda/issues/319" data-href="https://github.com/kedacore/keda/issues/319">already reported it</a>. Perfect! It seemed to be a legitimate bug, and maybe I should try solving it.</p>
<p class="code-line code-line" data-line="69">So I rolled up my sleeves, opened the code, <strong>searched for that error string</strong> in the code, and found it. Tracing it back, I understood what the problem was, and the solution turned out to be <a href="https://github.com/kedacore/keda/pull/404">relatively simple</a>.</p>
<h3 id="code-contribution-workflow-and-creating-the-pr" class="code-line code-line" data-line="72">Code contribution workflow, and creating the PR</h3>
<p class="code-line code-line" data-line="74">So what does the code change workflow look like?</p>
<p data-line="74"><img loading="lazy" decoding="async" class="size-large wp-image-765 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/11/repo-structure-1024x816.jpg" alt="Structure of Repositories" width="665" height="530" srcset="https://www.emadashi.com/wp-content/uploads/2020/11/repo-structure-1024x816.jpg 1024w, https://www.emadashi.com/wp-content/uploads/2020/11/repo-structure-300x239.jpg 300w, https://www.emadashi.com/wp-content/uploads/2020/11/repo-structure-768x612.jpg 768w, https://www.emadashi.com/wp-content/uploads/2020/11/repo-structure-660x526.jpg 660w, https://www.emadashi.com/wp-content/uploads/2020/11/repo-structure.jpg 1253w" sizes="(max-width: 665px) 100vw, 665px" /></p>
<p class="code-line code-line" data-line="76">This will depend on how comfortable you are with Git, but I do the following:</p>
<ol>
<li class="code-line code-line" data-line="78">Fork the repo to my GitHub account</li>
<li class="code-line code-line" data-line="79">Clone my new fork to my disk (name the remote &#8220;eashi&#8221;)</li>
<li class="code-line code-line" data-line="80">Add the original repo as another remote, &#8220;origin&#8221;. By now I have two remotes: &#8220;origin&#8221;, pointing at the original repo, and &#8220;eashi&#8221;, which is my fork (you can swap the names; it&#8217;s up to you)</li>
<li class="code-line code-line" data-line="81">Create a branch for the new code</li>
<li class="code-line code-line" data-line="82">Push the new branch to my repo</li>
<li class="code-line code-line" data-line="83">Create a PR from the new branch in &#8220;eashi&#8221; to the &#8220;main&#8221; branch in the &#8220;origin&#8221; repo.</li>
<li class="code-line code-line" data-line="84">After the maintainers merge the PR to the &#8220;main&#8221; branch in the project, I pull from the &#8220;main&#8221; in &#8220;origin&#8221; to the &#8220;main&#8221; branch in &#8220;eashi&#8221;.</li>
</ol>
<p class="code-line code-line" data-line="86">One step I didn&#8217;t mention above: I pull from &#8220;main&#8221; on &#8220;origin&#8221; into my fork&#8217;s &#8220;main&#8221; regularly, whenever necessary.</p>
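<p>The steps above can be sketched as shell commands. Here I simulate the two GitHub repositories with local bare repos so the flow is self-contained; all names and paths are illustrative stand-ins, not the real KEDA repos:</p>

```shell
set -e
tmp=$(mktemp -d)

# stand-ins for the GitHub repos: "upstream" plays the original project,
# "fork" plays my copy of it under my account (step 1 would be the fork itself)
git init -q --bare "$tmp/upstream.git"
git init -q --bare "$tmp/fork.git"

# step 2: clone my fork to disk
git clone -q "$tmp/fork.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email "you@example.com"
git config user.name  "You"

# rename the clone's default remote so my fork is called "eashi" ...
git remote rename origin eashi
# ... and add the original repo as the remote "origin" (step 3)
git remote add origin "$tmp/upstream.git"

# step 4: create a branch for the new code, and commit something to it
git checkout -q -b my-fix
echo "fix" > fix.txt
git add fix.txt
git commit -qm "my fix"

# step 5: push the new branch to my fork; the PR (step 6) is then opened
# on GitHub from eashi/my-fix into origin's main branch
git push -q eashi my-fix

git remote
```

The key point of the layout is that you never push to the original repo directly; your fork is the only remote you write to, and the PR is the bridge back.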
<h3 id="creating-the-pr" class="code-line code-line" data-line="89">Creating the PR</h3>
<p class="code-line code-line " data-line="91">After I ran the code and validated that my code indeed solves the problem, I created a PR.</p>
<p class="code-line code-line" data-line="93">The description of the PR should make <strong>the goal of the code change clear</strong>, should describe <strong>how this PR achieves this goal</strong>, and it&#8217;s best if it <strong>includes reference to the issue</strong> that the PR is established upon.</p>
<p class="code-line code-line" data-line="95">Most projects these days provide a check-list of prerequisites that have to be met before the PR can be accepted and merged. Maintainers try to make this easy by providing a template: when the contributor creates the PR, its initial description explains how the PR should be structured.</p>
<p class="code-line code-line" data-line="97">In KEDA&#8217;s case, it&#8217;s 4 items:</p>
<ul>
<li class="code-line code-line" data-line="98">Commits are signed with Developer Certificate of Origin (DCO)</li>
<li class="code-line code-line" data-line="99">Tests have been added</li>
<li class="code-line code-line" data-line="100">A PR is opened to update the documentation on <a title="https://github.com/kedacore/keda-docs" href="https://github.com/kedacore/keda-docs" data-href="https://github.com/kedacore/keda-docs">https://github.com/kedacore/keda-docs</a></li>
<li class="code-line code-line" data-line="101">Changelog has been updated</li>
</ul>
<p><img loading="lazy" decoding="async" class="size-full wp-image-766 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/11/static-checklist.png" alt="manual checklist" width="801" height="187" srcset="https://www.emadashi.com/wp-content/uploads/2020/11/static-checklist.png 801w, https://www.emadashi.com/wp-content/uploads/2020/11/static-checklist-300x70.png 300w, https://www.emadashi.com/wp-content/uploads/2020/11/static-checklist-768x179.png 768w, https://www.emadashi.com/wp-content/uploads/2020/11/static-checklist-660x154.png 660w" sizes="(max-width: 801px) 100vw, 801px" /></p>
<p class="code-line code-line" data-line="103">I made sure that I ticked all the boxes in the PR.</p>
<p class="code-line code-line" data-line="105">In addition to this manual check-list, there is an automated check-list that PRs go through: the code needs to compile, the tests need to run successfully, and the code should be scanned.</p>
<p class="code-line code-line" data-line="107">Of course, these checks also differ from project to project, and with the help of GitHub Actions they can run on every PR created, and on every update to the PR.</p>
<p data-line="107"><img loading="lazy" decoding="async" class=" wp-image-767 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/11/automated-checklist.png" alt="" width="613" height="379" srcset="https://www.emadashi.com/wp-content/uploads/2020/11/automated-checklist.png 917w, https://www.emadashi.com/wp-content/uploads/2020/11/automated-checklist-300x185.png 300w, https://www.emadashi.com/wp-content/uploads/2020/11/automated-checklist-768x475.png 768w, https://www.emadashi.com/wp-content/uploads/2020/11/automated-checklist-660x408.png 660w" sizes="(max-width: 613px) 100vw, 613px" /></p>
<h3 id="be-patient-now" class="code-line code-line" data-line="109">Be patient now!</h3>
<p class="code-line code-line" data-line="111">After you have created the PR, it will take time: days, sometimes weeks! DON&#8217;T be pushy, hasty, rude, or disappointed. <strong>Maintainers are humans, and they have families and priorities</strong>. This might not be their full-time job, and this PR might NOT be at the top of their priorities.</p>
<p class="code-line code-line" data-line="113">It&#8217;s good that you have given from your time to contribute to the project, maintainers will really appreciate it, trust that if it takes time before they merge or comment on your PR doesn&#8217;t mean that they don&#8217;t value your contribution.</p>
<h3 id="receiving-feedback-on-pr-and-actioning-on-it" class="code-line code-line" data-line="116">Receiving feedback on PR, and actioning on it</h3>
<p class="code-line code-line" data-line="118">It&#8217;s very rare that the PR will be merged without comments or feedback, most of the time maintainers will have questions, at least. Any line of code is added to the repo is a responsibility, and it&#8217;s good for all parties, including you, to only allow code that is of good quality and of a good reason.</p>
<p class="code-line code-line" data-line="120"><strong>Don&#8217;t take the feedback personally</strong>, if you don&#8217;t agree with their comments try to have a good conversation about it, assume their best intentions and try to convince them why things should be done your way. If they are not convinced don&#8217;t be frustrated, after all it&#8217;s their responsibility to be that quality gate.</p>
<p class="code-line code-line" data-line="122">Just a reminder that we are still talking about a small contribution, like a bug fix. In theory, this should involve very little debate.</p>
<h2 id="but-i-want-to-contribute-a-bigger-and-more-important-code" class="code-line code-line" data-line="125">But I want to contribute a bigger and more important code</h2>
<p class="code-line code-line" data-line="127">This brings us to the <a title="https://github.com/kedacore/keda/pull/1305" href="https://github.com/kedacore/keda/pull/1305" data-href="https://github.com/kedacore/keda/pull/1305">PR that</a> sparked this post. After I got more comfortable with the project as explained above, I felt I could take on a bigger change.</p>
<p class="code-line code-line" data-line="129">So I started looking for Issues that were a little bigger than a bug, and I found one that was perfect for me! At that time, I was interested in Azure&#8217;s Pod Identity; I had read a couple of articles about it and got the basic concepts, but it was not entirely clear in my head.</p>
<p class="code-line code-line" data-line="131">The issue I found was &#8220;<a title="https://github.com/kedacore/keda/issues/994" href="https://github.com/kedacore/keda/issues/994" data-href="https://github.com/kedacore/keda/issues/994">support AAD Pod Identity authentication for azure event hubs</a>&#8220;. It was something I was already interested in, it was a little bit bigger than a bug, and it was something I THOUGHT I could deliver. <strong>Was I confident that I could do this? Not really</strong>, and that&#8217;s alright! If you find yourself in such a situation, don&#8217;t worry; embark on the mission and you will learn your way through.</p>
<p class="code-line code-line" data-line="133">Sometimes there isn&#8217;t a clear Issue that makes things easier for contributors. <strong>In that case, I urge you to reach out to the maintainers</strong> on chat, Twitter, or whatever means available to express your interest in helping; they will guide you.</p>
<p class="code-line code-line" data-line="135">So I put a comment on the Issue to <strong><a title="https://github.com/kedacore/keda/issues/994#issuecomment-706170071" href="https://github.com/kedacore/keda/issues/994#issuecomment-706170071" data-href="https://github.com/kedacore/keda/issues/994#issuecomment-706170071">express my interest in doing it</a></strong>; there was a possibility that someone was already working on this Issue, and I didn&#8217;t want to step on anyone&#8217;s toes, nor waste my own effort.</p>
<h3 id="how-much-effort-was-it" class="code-line code-line" data-line="137">How much effort was it?</h3>
<p class="code-line code-line" data-line="138">From the minute I showed interest up until the PR was merged, it took 40 days. The change wasn&#8217;t big: it was mainly in two significant files, around 60 lines of code. So where did the time go?!</p>
<ol>
<li class="code-line code-line" data-line="140">
<h4 id="investigating-and-researching" class="code-line code-line" data-line="140">Investigating and researching.</h4>
<p class="code-line code-line" data-line="141">A lot of my time went into investigating and researching: trying to understand the libraries I was depending on, and trying to understand how Pod Identity REALLY works.</p>
</li>
<li class="code-line code-line" data-line="143">
<h4 id="setting-the-dev-environment" class="code-line code-line" data-line="143">Setting the dev environment</h4>
<p class="code-line code-line" data-line="144">I have already mentioned above that I contributed small code changes before, but when I wanted to make this code change, I messed up my dev environment by installing different versions of the dependencies.</p>
<p class="code-line code-line" data-line="146">This caused some disruption and pushed me down the &#8220;<a title="https://code.visualstudio.com/docs/remote/containers" href="https://code.visualstudio.com/docs/remote/containers" data-href="https://code.visualstudio.com/docs/remote/containers">Remote Containers</a>&#8221; path, for which there was already some guidance in the Contribution guide. However, for some reason things didn&#8217;t work for me, and I had to wiggle my way through again to set things up.</p>
<p class="code-line code-line" data-line="148">I wanted to have a separate docs contribution for that part, but magically the bad behaviours stopped appearing <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f937-1f3fb-200d-2642-fe0f.png" alt="🤷🏻‍♂️" class="wp-smiley" style="height: 1em; max-height: 1em;" />. Keep in mind that this might happen to you too :).</p>
</li>
<li class="code-line code-line" data-line="151">
<h4 id="designing" class="code-line code-line" data-line="151">Designing</h4>
<p class="code-line code-line" data-line="152">&#8220;Designing&#8221; might be too grand a word for such a change, but I was trying to achieve the goal of the issue with the least amount of disruption to the code, while still adhering to its general spirit.</p>
</li>
<li class="code-line code-line" data-line="154">
<h4 id="troubleshooting" class="code-line code-line" data-line="154">Troubleshooting</h4>
<p class="code-line code-line" data-line="155">This one was really hard, because debugging in Kubernetes is not straightforward. Contrary to the traditional &#8220;put a breakpoint&#8221; way, I had to fill the code with logging statements and re-deploy every time I discovered I was in a blind spot.</p>
</li>
<li class="code-line code-line" data-line="157">
<h4 id="code-deploy-and-test-cycle" class="code-line code-line" data-line="157">Code, deploy, and test cycle</h4>
<p class="code-line code-line" data-line="158">The cycle of introducing a change, deploying it, running it, checking the result, and then changing the code again was time consuming, especially since it involved deploying to Azure AKS because the feature was Azure specific.</p>
</li>
</ol>
<p class="code-line code-line" data-line="161">This is where most of my time went, all of it late nights and weekends.</p>
<h3 id="having-an-azure-subscription" class="code-line code-line" data-line="163">Having an Azure Subscription</h3>
<p class="code-line code-line" data-line="165">I am fortunate enough to have a subscription that I don&#8217;t pay for out of my own pocket; this allowed me to really contribute to this feature. If I didn&#8217;t have such a subscription, it would have been an expensive contribution to OSS for me.</p>
<p class="code-line code-line" data-line="167">Thankfully as well, Microsoft announced at Ignite that you can shut down an AKS cluster with &#8220;<a href="https://docs.microsoft.com/en-us/azure/aks/start-stop-cluster">az aks stop/start</a>&#8220;; that made a good difference :).</p>
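<p>As a sketch, the nightly stop/start routine can look like the following shell snippet. The cluster and resource group names here are made up for illustration; substitute your own:</p>

```shell
# Hypothetical names -- replace with your own cluster and resource group.
CLUSTER="keda-dev"
RG="keda-dev-rg"

# Compose the commands first so they can be reviewed before running;
# drop the echo (or pipe the output to sh) to actually execute them.
stop_cmd="az aks stop --name $CLUSTER --resource-group $RG"
start_cmd="az aks start --name $CLUSTER --resource-group $RG"

echo "$stop_cmd"    # stop the cluster overnight so the nodes stop costing money
echo "$start_cmd"   # bring it back up before the next late-night session
```

<p>While the cluster is stopped, its node VMs are deallocated so the compute charges stop, and your deployments come back when the cluster starts again.</p>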
<h2 id="congratulations-the-pr-is-merged-now-what" class="code-line code-line" data-line="170">Congratulations, The PR is Merged, Now What?</h2>
<p class="code-line code-line" data-line="172">This calls for a celebration <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f389.png" alt="🎉" class="wp-smiley" style="height: 1em; max-height: 1em;" />! But it also means responsibility: I still worry that at some point my code might turn out to have a bug, but this comes with the package, and I believe I have to keep an eye on the Issues to make sure I can fix whatever is reported.</p>
<h2 id="conclusion" class="code-line code-line" data-line="174">Conclusion</h2>
<p class="code-line code-line" data-line="176">It was a long journey, but it doesn&#8217;t have to be that way. Everybody is different in their priorities, capabilities, interests, time, etc.</p>
<p class="code-line code-line" data-line="178">So this isn&#8217;t necessarily guidance; it&#8217;s my experience in contributing to OSS, and I hope it will help you navigate your own way through. 🙂</p>
<p>The post <a href="https://www.emadashi.com/2020/11/contributing-to-open-source-software-with-project-keda/">Contributing to Open Source Software with Project KEDA</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.emadashi.com/2020/11/contributing-to-open-source-software-with-project-keda/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>&#8220;Azure Functions on Kubernetes&#8221; Talk With Integration Down Under Meetup</title>
		<link>https://www.emadashi.com/2020/06/azure-functions-on-kubernetes-talk-with-integration-down-under-meetup/</link>
					<comments>https://www.emadashi.com/2020/06/azure-functions-on-kubernetes-talk-with-integration-down-under-meetup/#respond</comments>
		
		<dc:creator><![CDATA[Emad Alashi]]></dc:creator>
		<pubDate>Sun, 28 Jun 2020 05:18:44 +0000</pubDate>
				<category><![CDATA[Misc]]></category>
		<guid isPermaLink="false">http://www.emadashi.com/?p=745</guid>

					<description><![CDATA[<p>Earlier this month I was invited to talk about Azure Functions on Kubernetes at the Integration Down Under meetup. It&#8217;s an amazing meetup held by highly regarded professionals from all around Australia. Below is the recording of the session, make sure to follow their channel because they post regularly. Also all feedback is&#8230; <span class="read-more"><a href="https://www.emadashi.com/2020/06/azure-functions-on-kubernetes-talk-with-integration-down-under-meetup/">Read More &#187;</a></span></p>
<p>The post <a href="https://www.emadashi.com/2020/06/azure-functions-on-kubernetes-talk-with-integration-down-under-meetup/">&#8220;Azure Functions on Kubernetes&#8221; Talk With Integration Down Under Meetup</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Earlier this month I was invited to talk about Azure Functions on Kubernetes at the Integration Down Under meetup. It&#8217;s an amazing meetup held by highly regarded professionals from all around Australia.</p>
<p>Below is the recording of the session; make sure to follow their channel, because they post regularly. Also, all feedback is welcome :).</p>
<p><iframe loading="lazy" src="https://www.youtube.com/embed/wbn4wZab_9Q" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>The post <a href="https://www.emadashi.com/2020/06/azure-functions-on-kubernetes-talk-with-integration-down-under-meetup/">&#8220;Azure Functions on Kubernetes&#8221; Talk With Integration Down Under Meetup</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.emadashi.com/2020/06/azure-functions-on-kubernetes-talk-with-integration-down-under-meetup/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>My Twitch Streaming Setup – Part 2 Software</title>
		<link>https://www.emadashi.com/2020/06/my-twitch-streaming-setup-part-2-software/</link>
					<comments>https://www.emadashi.com/2020/06/my-twitch-streaming-setup-part-2-software/#comments</comments>
		
		<dc:creator><![CDATA[Emad Alashi]]></dc:creator>
		<pubDate>Sun, 21 Jun 2020 07:55:20 +0000</pubDate>
				<category><![CDATA[Misc]]></category>
		<guid isPermaLink="false">http://www.emadashi.com/?p=725</guid>

					<description><![CDATA[<p>This is part 2 of the three-part blog series about my Twitch streaming setup. Hardware Software (this post) Humanware As I have said before, I have learned a lot from amazing streamers like @noopkat and @csharpfritz so you will find a lot of this content matches theirs. OBS Streaming Configuration OBS is the main software that streams to the streaming&#8230; <span class="read-more"><a href="https://www.emadashi.com/2020/06/my-twitch-streaming-setup-part-2-software/">Read More &#187;</a></span></p>
<p>The post <a href="https://www.emadashi.com/2020/06/my-twitch-streaming-setup-part-2-software/">My Twitch Streaming Setup – Part 2 Software</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="code-line code-line" data-line="0">This is part 2 of the three-part blog series about my Twitch streaming setup.</p>
<ul>
<li class="code-line code-line" data-line="2"><a title="https://www.emadashi.com/2020/05/my-twitch-streaming-setup-part-1-hardware/" href="https://www.emadashi.com/2020/05/my-twitch-streaming-setup-part-1-hardware/" data-href="https://www.emadashi.com/2020/05/my-twitch-streaming-setup-part-1-hardware/">Hardware</a></li>
<li class="code-line code-line" data-line="3">Software (this post)</li>
<li class="code-line code-line" data-line="4">Humanware</li>
</ul>
<p class="code-line code-line" data-line="6">As I have said before, I have learned a lot from amazing streamers like <a title="https://twitch.tv/noopkat" href="https://twitch.tv/noopkat" data-href="https://twitch.tv/noopkat">@noopkat</a> and <a title="https://twitch.tv/csharpfritz" href="https://twitch.tv/csharpfritz" data-href="https://twitch.tv/csharpfritz">@csharpfritz</a>, so you will find that a lot of this content matches theirs.</p>
<h2 id="obs-streaming-configuration" class="code-line code-line" data-line="10">OBS Streaming Configuration</h2>
<p class="code-line code-line" data-line="11"><a title="https://obsproject.com/" href="https://obsproject.com/" data-href="https://obsproject.com/">OBS</a> is the main software that streams to the streaming service (Twitch in my case). I used <a title="https://streamlabs.com/" href="https://streamlabs.com/" data-href="https://streamlabs.com/">Streamlabs</a> at the beginning, but because it&#8217;s just an abstraction over OBS, I faced some limitations when I wanted to try different plugins. So I preferred to work directly with OBS itself.</p>
<p class="code-line code-line" data-line="13">Media is not really my expertise, and I don&#8217;t like to stray away from the default configuration OBS has for streaming, so I let OBS run its test and suggest the best configuration. The following is what it suggested:</p>
<ul>
<li class="code-line code-line" data-line="14">Output
<ul>
<li class="code-line code-line" data-line="15">Video Bitrate: 2500 Kbps</li>
<li class="code-line code-line" data-line="16">Encoder: Software (x264)</li>
<li class="code-line code-line" data-line="17">Audio Bitrate: 160 Kbps</li>
</ul>
</li>
<li class="code-line code-line" data-line="18">Video:
<ul>
<li class="code-line code-line" data-line="19">Base (Canvas) Resolution: 1920&#215;1080</li>
<li class="code-line code-line" data-line="20">Output (scaled) Resolution: 1280&#215;720 (I am not sure if this is the best configuration, but this is what I am using now <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f937-1f3fb-200d-2642-fe0f.png" alt="🤷🏻‍♂️" class="wp-smiley" style="height: 1em; max-height: 1em;" />).</li>
<li class="code-line code-line" data-line="21">Common FPS Values: 60</li>
</ul>
</li>
<li class="code-line code-line" data-line="22">Audio:
<ul>
<li class="code-line code-line" data-line="23">Sample Rate: 44.1 kHz</li>
<li class="code-line code-line" data-line="24">Channels: Stereo</li>
</ul>
</li>
</ul>
<h2 id="obs-scene-setup" class="code-line code-line" data-line="26">OBS Scene Setup</h2>
<p class="code-line code-line" data-line="28">In OBS, the way you construct the scene is by creating layers of Sources; each Source can be an image, a web page, a video, etc. It takes a little time to get used to, but you can check their website for more detailed guides.</p>
<p class="code-line code-line" data-line="30">Below is the scene setup I use, and <a title="https://link" href="https://github.com/eashi/my-twitch-stream-setup" data-href="https://link">here</a> it is in an exported JSON format.</p>
<h3 id="the-starting-soon-scene" class="code-line code-line" data-line="32">The Starting Soon scene</h3>
<p><img loading="lazy" decoding="async" class="size-large wp-image-729 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/06/StartingSoon-1024x579.jpg" alt="Starting Soon" width="665" height="376" srcset="https://www.emadashi.com/wp-content/uploads/2020/06/StartingSoon-1024x579.jpg 1024w, https://www.emadashi.com/wp-content/uploads/2020/06/StartingSoon-300x170.jpg 300w, https://www.emadashi.com/wp-content/uploads/2020/06/StartingSoon-768x434.jpg 768w, https://www.emadashi.com/wp-content/uploads/2020/06/StartingSoon-1536x868.jpg 1536w, https://www.emadashi.com/wp-content/uploads/2020/06/StartingSoon-660x373.jpg 660w, https://www.emadashi.com/wp-content/uploads/2020/06/StartingSoon.jpg 1672w" sizes="(max-width: 665px) 100vw, 665px" /></p>
<p class="code-line code-line" data-line="34">When it&#8217;s time for my stream to start, I don&#8217;t start the meaty content of the stream instantly. Instead, I display &#8220;starting soon&#8221; scene to give a chance for the people to join it, this only lasts for a couple of minutes not more.</p>
<p class="code-line code-line" data-line="36">In this scene, I only show a small video in a loop, without exposing my microphone audio. Some, however, make it really cool, like <a title="https://twitch.tv/csharpfritz" href="https://twitch.tv/csharpfritz" data-href="https://twitch.tv/csharpfritz">@csharpfritz</a> and <a title="https://twitch.tv/BaldBeardedBuilder" href="https://twitch.tv/BaldBeardedBuilder" data-href="https://twitch.tv/BaldBeardedBuilder">@BaldBeardedBuilder</a>, who display their shadows as they move to prepare for the stream.</p>
<h3 id="the-me-talking-scene" class="code-line code-line" data-line="38">The Me Talking scene</h3>
<p><img loading="lazy" decoding="async" class="size-large wp-image-730 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/06/MeTalking-1024x577.jpg" alt="Me Talking scene" width="665" height="375" srcset="https://www.emadashi.com/wp-content/uploads/2020/06/MeTalking-1024x577.jpg 1024w, https://www.emadashi.com/wp-content/uploads/2020/06/MeTalking-300x169.jpg 300w, https://www.emadashi.com/wp-content/uploads/2020/06/MeTalking-768x432.jpg 768w, https://www.emadashi.com/wp-content/uploads/2020/06/MeTalking-660x372.jpg 660w, https://www.emadashi.com/wp-content/uploads/2020/06/MeTalking.jpg 1151w" sizes="(max-width: 665px) 100vw, 665px" /></p>
<p class="code-line code-line" data-line="40">After displaying the Starting Soon scene for about two minutes, I bring the Me Talking scene up. It&#8217;s a focus on my face, with the chat overlay displayed next to it. In the scene, I also display my Twitter handle so that people who don&#8217;t know me can instantly look me up on Twitter (I stole this from <a href="https://twitch.tv/davidwengier">@davidwengier</a> :D).</p>
<p class="code-line code-line" data-line="42">I use this scene to establish a connection with the viewers, without them being distracted by code. It feels more direct and clear. I usually do a recap here: I explain what we went through in the last session, and what our plan is for that day&#8217;s session.</p>
<p class="code-line code-line" data-line="44">In these scenes, I utilise my green screen. My office is not the best office in the world, and having a green screen means I can put up a soothing blue background. I haven&#8217;t gone crazy with my backgrounds, but I will experiment with this in the future.</p>
<p class="code-line code-line" data-line="46">The biggest trick in this one is to remember to switch to the Code scene; it has happened twice that I jumped into the coding part while the scene displayed was still just my big face, no code, *facepalm*! There are some plugins that allow automatic switching, but I haven&#8217;t checked them out.</p>
<p class="code-line code-line" data-line="48">I also display the chat-box using the <a title="https://streamlabs.com/obs-widgets/chat-box" href="https://streamlabs.com/obs-widgets/chat-box" data-href="https://streamlabs.com/obs-widgets/chat-box">Chat-box extension</a> from Streamlabs. I display this on all my scenes (oops! except for the Secret scene below, I just remembered while typing this <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f605.png" alt="😅" class="wp-smiley" style="height: 1em; max-height: 1em;" />). The reason I display the chat is that Twitch doesn&#8217;t allow you to keep the videos as an archive, so I offload them to YouTube. Once a video is on YouTube, there is no capture of the chat unless I record it as part of the video.</p>
<p class="code-line code-line" data-line="50">I know that if I stream on YouTube directly, the chat messages will be replayed alongside the recording of the stream; I think this is a beautiful feature. I might consider broadcasting on YouTube in the future, but I&#8217;m focusing on one platform for now.</p>
<h3 id="the-coding-scene" class="code-line code-line" data-line="52">The Coding scene</h3>
<p data-line="54"><img loading="lazy" decoding="async" class="size-large wp-image-731 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/06/CodingScene-1024x571.jpg" alt="Coding Scene" width="665" height="371" srcset="https://www.emadashi.com/wp-content/uploads/2020/06/CodingScene-1024x571.jpg 1024w, https://www.emadashi.com/wp-content/uploads/2020/06/CodingScene-300x167.jpg 300w, https://www.emadashi.com/wp-content/uploads/2020/06/CodingScene-768x428.jpg 768w, https://www.emadashi.com/wp-content/uploads/2020/06/CodingScene-660x368.jpg 660w, https://www.emadashi.com/wp-content/uploads/2020/06/CodingScene.jpg 1157w" sizes="(max-width: 665px) 100vw, 665px" /></p>
<p class="code-line code-line" data-line="54">The Coding scene is the one I show for most of the stream. I have a part (on the left) where I show the code editor/desktop/web pages, a part where I show the chat messages (top right), and a part where I show my camera (bottom right).</p>
<p class="code-line code-line" data-line="56">You will notice that I completely separate the chat and the camera from the code editor, unlike other streamers who show their cameras and chat on top of the code editor. My main reason for this setup is that in rare cases I have to show something at the right corner of the screen, and my camera or the chat would otherwise obstruct it.</p>
<h3 id="the-secret-scene" class="code-line code-line" data-line="62">The Secret scene</h3>
<p><img loading="lazy" decoding="async" class="size-large wp-image-732 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/06/SecretScene-1024x575.jpg" alt="Secret Scene" width="665" height="373" srcset="https://www.emadashi.com/wp-content/uploads/2020/06/SecretScene-1024x575.jpg 1024w, https://www.emadashi.com/wp-content/uploads/2020/06/SecretScene-300x169.jpg 300w, https://www.emadashi.com/wp-content/uploads/2020/06/SecretScene-768x432.jpg 768w, https://www.emadashi.com/wp-content/uploads/2020/06/SecretScene-660x371.jpg 660w, https://www.emadashi.com/wp-content/uploads/2020/06/SecretScene.jpg 1155w" sizes="(max-width: 665px) 100vw, 665px" /></p>
<p data-line="63">In this scene, I show a funny video of people juggling knives in a loop. I show the video, my camera, and my audio.</p>
<p class="code-line code-line" data-line="63">Sometimes I need to display secret tokens or passwords on my screen. I have three monitors but stream only one of them, so I could simply move my editor to another screen. However, moving the windows around is a little bit tricky because they don&#8217;t fill the screen (check the &#8220;The Main Monitor&#8221; section below for why). Thus, I decided to have this Secret scene; it&#8217;s also funny 😀</p>
<p class="code-line code-line" data-line="67">I have streamed before, writing a Visual Studio Code extension that hides YAML nodes that are secrets, but it used regular expressions and was never published. My next stream, God willing, will be a new extension that is built better using a proper YAML parser (did I just say &#8220;build it again but better&#8221;?! We developers never change!).</p>
<h2 id="audio" class="code-line code-line" data-line="70">Audio</h2>
<p class="code-line code-line" data-line="71">I use <a title="https://rogueamoeba.com/freebies/soundflower/" href="https://rogueamoeba.com/freebies/soundflower/" data-href="https://rogueamoeba.com/freebies/soundflower/">Soundflower</a> to create the right audio setup. It is needed to turn the audio output of your machine into another audio source for the stream. <a title="https://link" href="https://www.youtube.com/watch?v=_lR0ef69WG0" data-href="https://link">Here is</a> a YouTube video I found useful on how to set it up.</p>
<p class="code-line code-line" data-line="73"><img loading="lazy" decoding="async" class="size-full wp-image-733 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/06/audiosetup.jpg" alt="Audio Setup" width="637" height="468" srcset="https://www.emadashi.com/wp-content/uploads/2020/06/audiosetup.jpg 637w, https://www.emadashi.com/wp-content/uploads/2020/06/audiosetup-300x220.jpg 300w" sizes="(max-width: 637px) 100vw, 637px" /></p>
<p class="code-line code-line" data-line="75">I use my H2N microphone mentioned in part 1 as the main microphone. Music is too distracting to me during coding, so I don&#8217;t play music at all.</p>
<h2 id="the-pilots-view-of-the-setup" class="code-line code-line" data-line="77">The Pilot&#8217;s View of the Setup</h2>
<p class="code-line code-line" data-line="79">This is my view when I am streaming; I have two external monitors plus the laptop&#8217;s screen:</p>
<h3 id="the-main-monitor" class="code-line code-line" data-line="81">The Main Monitor</h3>
<p class="code-line code-line" data-line="83">This is where I write the code, and you can see that I have left some room on the right to fit the chat overlay so that it&#8217;s captured with the video. This makes the chat a permanent part of the recording, so people who watch the video later on a different medium can relate to my comments on the chat.</p>
<p data-line="83"><img loading="lazy" decoding="async" class="size-large wp-image-734 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/06/CodingView-1024x576.jpg" alt="Coding View" width="665" height="374" srcset="https://www.emadashi.com/wp-content/uploads/2020/06/CodingView-1024x576.jpg 1024w, https://www.emadashi.com/wp-content/uploads/2020/06/CodingView-300x169.jpg 300w, https://www.emadashi.com/wp-content/uploads/2020/06/CodingView-768x432.jpg 768w, https://www.emadashi.com/wp-content/uploads/2020/06/CodingView-1536x864.jpg 1536w, https://www.emadashi.com/wp-content/uploads/2020/06/CodingView-660x371.jpg 660w, https://www.emadashi.com/wp-content/uploads/2020/06/CodingView.jpg 1920w" sizes="(max-width: 665px) 100vw, 665px" /></p>
<p class="code-line code-line" data-line="85">It took some time to get used to, because I am usually a full-screen guy, but after a while you just don&#8217;t see the void.</p>
<h3 id="the-obs-monitor" class="code-line code-line" data-line="87">The OBS monitor</h3>
<p><img loading="lazy" decoding="async" class="size-large wp-image-735 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/06/ObsCodingView-1024x576.jpg" alt="OBS Coding View" width="665" height="374" srcset="https://www.emadashi.com/wp-content/uploads/2020/06/ObsCodingView-1024x576.jpg 1024w, https://www.emadashi.com/wp-content/uploads/2020/06/ObsCodingView-300x169.jpg 300w, https://www.emadashi.com/wp-content/uploads/2020/06/ObsCodingView-768x432.jpg 768w, https://www.emadashi.com/wp-content/uploads/2020/06/ObsCodingView-1536x864.jpg 1536w, https://www.emadashi.com/wp-content/uploads/2020/06/ObsCodingView-660x371.jpg 660w, https://www.emadashi.com/wp-content/uploads/2020/06/ObsCodingView.jpg 1920w" sizes="(max-width: 665px) 100vw, 665px" /></p>
<p class="code-line code-line" data-line="88">This is where I leave my OBS open, and where I click to switch scenes whenever I need to.</p>
<p class="code-line code-line" data-line="88">I also use it sometimes to read the chat from the chat overlay. This is not optimal, as I often find myself squinting to read the small font. And because it&#8217;s away from the camera, it looks as if I &#8220;look away&#8221; from the audience to read what they are saying.</p>
<p class="code-line code-line" data-line="92">The only good thing about reading the messages from OBS itself is that I am reading from the same source the audience reads from. This way, if for some reason the overlay is not working, I won&#8217;t be reading something the audience can&#8217;t see.</p>
<p class="code-line code-line" data-line="94">In my future streams, I will grab the link of the chat overlay, put it in a browser, and then squeeze that browser window next to my VSCode. This way I won&#8217;t look away from the audience to read their messages. (I tried this while proof-reading this post; it didn&#8217;t work because the chat overlay has a minimum width :(. I will update you when I find a better solution.)</p>
<h3 id="the-laptop-monitor" class="code-line code-line" data-line="96">The laptop monitor</h3>
<p><img loading="lazy" decoding="async" class="size-large wp-image-736 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/06/twitchScreen-1024x640.jpg" alt="Twitch Screen" width="665" height="416" srcset="https://www.emadashi.com/wp-content/uploads/2020/06/twitchScreen-1024x640.jpg 1024w, https://www.emadashi.com/wp-content/uploads/2020/06/twitchScreen-300x188.jpg 300w, https://www.emadashi.com/wp-content/uploads/2020/06/twitchScreen-768x480.jpg 768w, https://www.emadashi.com/wp-content/uploads/2020/06/twitchScreen-1536x960.jpg 1536w, https://www.emadashi.com/wp-content/uploads/2020/06/twitchScreen-2048x1280.jpg 2048w, https://www.emadashi.com/wp-content/uploads/2020/06/twitchScreen-660x413.jpg 660w" sizes="(max-width: 665px) 100vw, 665px" /></p>
<p class="code-line code-line" data-line="98">I use this screen as an auxiliary monitor if required; recently I started opening the native Twitch app to see how things are going on the other end of the stream. (I have no idea why the Twitch app was complaining about the internet at that time :D.)</p>
<h2 id="applescript" class="code-line code-line" data-line="100">AppleScript</h2>
<p class="code-line code-line" data-line="101">I was a Windows user for so many years of my life; I wish there were an equivalent to AppleScript on Windows (maybe there is, let me know if so). It automates many aspects of macOS: opening applications, resizing windows, changing settings, and a lot more.</p>
<p class="code-line code-line" data-line="103">I got this tip from <a title="https://twitter.com/noopkat" href="https://twitter.com/noopkat" data-href="https://twitter.com/noopkat">@noopkat</a> (like many other tips). Just before I start my streaming session, I run my script and it opens OBS, opens a clean browser window, resizes my VSCode and places it in the right position&#8230;everything! You can have a look at it <a title="https://gist.github.com/eashi/8442e175d285ab23859080b7cf4471e0" href="https://gist.github.com/eashi/8442e175d285ab23859080b7cf4471e0" data-href="https://gist.github.com/eashi/8442e175d285ab23859080b7cf4471e0">here</a>.</p>
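<p>For a flavour of what such a script does, here is a minimal sketch driven from the shell via <code>osascript</code>. The app names and window position below are examples only, not my actual setup, and each step is skipped on non-macOS machines:</p>

```shell
# Run an AppleScript snippet when osascript is available (i.e. on macOS);
# otherwise just report what would have run, so the sketch is safe to dry-run.
run_osa() {
  if command -v osascript >/dev/null 2>&1; then
    osascript -e "$1"
  else
    echo "skipped: $1"
  fi
}

run_osa 'tell application "OBS" to activate'                  # bring OBS up
run_osa 'tell application "Visual Studio Code" to activate'   # bring VSCode up
# Window placement goes through System Events (needs accessibility permission):
run_osa 'tell application "System Events" to tell process "Code" to set position of front window to {0, 0}'
```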
<h2 id="visual-studio-code" class="code-line code-line" data-line="106">Visual Studio Code</h2>
<p class="code-line code-line" data-line="108">After .NET Core came along and a MacBook became my main PC, my main IDE became Visual Studio Code. I love its reliability, lightness, and extensibility.</p>
<p class="code-line code-line" data-line="110">There is a feature in VSCode called &#8220;Screencast Mode&#8221; (thanks <a title="https://twitter.com/shahiddev" href="https://twitter.com/shahiddev" data-href="https://twitter.com/shahiddev">@ShahidDev</a> for the tip); it displays on screen the keys pressed on the keyboard. This is a very good way to share the love of shortcuts with the audience.</p>
<p data-line="110"><img loading="lazy" decoding="async" class="size-full wp-image-737 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/06/ScreencastMode.jpg" alt="Screencast Mode" width="642" height="201" srcset="https://www.emadashi.com/wp-content/uploads/2020/06/ScreencastMode.jpg 642w, https://www.emadashi.com/wp-content/uploads/2020/06/ScreencastMode-300x94.jpg 300w" sizes="(max-width: 642px) 100vw, 642px" /></p>
<p class="code-line code-line" data-line="113">There are tools that work at the OS level, but I haven&#8217;t used them yet. I have my eye on <a title="https://github.com/keycastr/keycastr" href="https://github.com/keycastr/keycastr" data-href="https://github.com/keycastr/keycastr">KeyCastr</a>, though I haven&#8217;t tried it yet.</p>
<h2 id="zsh-as-terminal" class="code-line code-line" data-line="115">Zsh as my shell</h2>
<p class="code-line code-line" data-line="116">I can&#8217;t really say a lot about this one; it&#8217;s a feature-rich shell that is also extensible. However, I haven&#8217;t really utilised a lot of its features. I hope <a title="https://github.com/tkoster" href="https://github.com/tkoster" data-href="https://github.com/tkoster">@tkoster</a> is not disappointed, as he is my guide in anything related to Unix and Linux!</p>
<h2 id="twitch-channel-setup" class="code-line code-line" data-line="118">Twitch channel setup</h2>
<p class="code-line code-line" data-line="120">Truly, Twitch does NOT have the best UX for its platform; it&#8217;s confusing, to say the least. However, they are constantly changing things around and trying their best to improve.</p>
<p class="code-line code-line" data-line="122">The customisable page in your channel (at least at the time of writing) is the About page, where you can add panels and custom content. Below are the panels I have.</p>
<p class="code-line code-line" data-line="124"><img loading="lazy" decoding="async" class="size-large wp-image-738 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/06/TwitchPanels-1024x497.jpg" alt="Twitch Panels" width="665" height="323" srcset="https://www.emadashi.com/wp-content/uploads/2020/06/TwitchPanels-1024x497.jpg 1024w, https://www.emadashi.com/wp-content/uploads/2020/06/TwitchPanels-300x146.jpg 300w, https://www.emadashi.com/wp-content/uploads/2020/06/TwitchPanels-768x373.jpg 768w, https://www.emadashi.com/wp-content/uploads/2020/06/TwitchPanels-660x320.jpg 660w, https://www.emadashi.com/wp-content/uploads/2020/06/TwitchPanels.jpg 1174w" sizes="(max-width: 665px) 100vw, 665px" /></p>
<h3 id="about-me-panel" class="code-line code-line" data-line="126">About Me panel</h3>
<p class="code-line code-line" data-line="127">This is where the bio goes. I like to use the first person singular on Twitch because it&#8217;s a direct conversation with the audience; it makes more sense to say &#8220;hi, welcome to my channel&#8221; than &#8220;Emad is a developer&#8230;&#8221;.</p>
<p data-line="127">After I had had this panel for a while, Twitch decided to display the About Me info from your profile on the About page, which somehow made this panel redundant. However, the About Me in the profile is limited to 300 characters, while in this panel you can write more, hence I renamed mine to &#8220;More About Me&#8221;!</p>
<h3 id="stream-schedule-panel" class="code-line code-line" data-line="129">Stream Schedule panel</h3>
<p class="code-line code-line" data-line="130">This is an <a title="https://dashboard.twitch.tv/extensions/naty2zwfp7vecaivuve8ef1hohh6bo-1.0.12" href="https://dashboard.twitch.tv/extensions/naty2zwfp7vecaivuve8ef1hohh6bo-1.0.12" data-href="https://dashboard.twitch.tv/extensions/naty2zwfp7vecaivuve8ef1hohh6bo-1.0.12">extension</a> from Streamlabs to show visitors when your next stream is. Twitch has introduced their own Schedule page in the channel, so this one might be redundant.</p>
<p class="code-line code-line" data-line="132">I am not sure this is really benefiting the audience though; at some point I was thinking of creating my own Google calendar for the stream and letting people subscribe to it, so they know when I cancel a session. I might still do it, so stay tuned, and let me know if you would like that too.</p>
<h3 id="twitter-feed-panel" class="code-line code-line" data-line="134">Twitter feed panel</h3>
<p class="code-line code-line" data-line="135">There is nothing like Twitter to tell people who you really are (not really, it&#8217;s only 140 characters of text!). <a title="https://dashboard.twitch.tv/extensions/qcxdzgqw0sd1u50wqtwodjfd5dmkxz-1.0.0" href="https://dashboard.twitch.tv/extensions/qcxdzgqw0sd1u50wqtwodjfd5dmkxz-1.0.0" data-href="https://dashboard.twitch.tv/extensions/qcxdzgqw0sd1u50wqtwodjfd5dmkxz-1.0.0">This extension</a> provides a list of your most recent tweets.</p>
<h3 data-line="135">Chat Bot</h3>
<p>Up until writing this post, I didn&#8217;t have a chat bot. But in the last stream one of the audience sent &#8220;!theme&#8221;, and I didn&#8217;t have a bot to answer, so I stopped whatever I was doing and started showing my theme. If I had had a chat bot set up, this wouldn&#8217;t have happened.</p>
<p>The next step after posting this is to set up a bot. There are so many bots out there; <a href="https://www.youtube.com/watch?v=hCPPTIX-bBM">this video</a> shows several of them. I am already leaning towards <a href="https://nightbot.tv/">nightbot</a>, but I also know that <a href="https://twitch.tv/csharpfritz">@csharpfritz</a> has been working on one, so let&#8217;s see how it goes; it will probably be another post, so stay tuned or ping me if I don&#8217;t write about it ;).</p>
<h2 data-line="135">Summary</h2>
<p>As you can see, this is a space of continuous change; you will find yourself changing setups and tweaking settings here and there until you find the final setup, which will soon change again :D.</p>
<p>I hope this was beneficial, let me know if you need more information.</p>
<p>The post <a href="https://www.emadashi.com/2020/06/my-twitch-streaming-setup-part-2-software/">My Twitch Streaming Setup – Part 2 Software</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.emadashi.com/2020/06/my-twitch-streaming-setup-part-2-software/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>My Twitch Streaming Setup &#8211; Part 1 Hardware</title>
		<link>https://www.emadashi.com/2020/05/my-twitch-streaming-setup-part-1-hardware/</link>
					<comments>https://www.emadashi.com/2020/05/my-twitch-streaming-setup-part-1-hardware/#respond</comments>
		
		<dc:creator><![CDATA[Emad Alashi]]></dc:creator>
		<pubDate>Tue, 19 May 2020 13:42:49 +0000</pubDate>
				<category><![CDATA[Misc]]></category>
		<guid isPermaLink="false">http://www.emadashi.com/?p=697</guid>

					<description><![CDATA[<p>There has been a lot of interest lately about home studio setups for streaming and recording videos. In this post I will explain my Twitch streaming setup, and my experience thus far. It&#8217;s been less than a year for my journey in streaming on Twitch, and I am still learning and trying things out, so… <span class="read-more"><a href="https://www.emadashi.com/2020/05/my-twitch-streaming-setup-part-1-hardware/">Read More &#187;</a></span></p>
<p>The post <a href="https://www.emadashi.com/2020/05/my-twitch-streaming-setup-part-1-hardware/">My Twitch Streaming Setup &#8211; Part 1 Hardware</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>There has been a lot of interest lately about home studio setups for streaming and recording videos. In this post I will explain my <a href="https://twitch.tv/emadashi">Twitch</a> streaming setup, and my experience thus far. It&#8217;s been less than a year for my journey in streaming on Twitch, and I am still learning and trying things out, so read these posts with that context in mind.</p>
<p>While writing this post, I realised that it&#8217;s going to be a long one, so I will break it down into three posts:</p>
<p>1. Hardware (this article)<br />
2. <a href="https://www.emadashi.com/2020/06/my-twitch-streaming-setup-part-2-software/">Software</a><br />
3. Humanware</p>
<h2>Laptop</h2>
<p>At the beginning of my journey, I streamed from a Surface Pro 4 (Intel Core i7-6650U 2.2 to 3.4 GHz and 16G RAM). This worked fine when I streamed while working on pure Azure tasks that didn&#8217;t involve much CPU consumption. But when I started doing local development and compiling code, my frames started dropping and my audience started complaining about the quality of my stream.</p>
<p><span data-preserver-spaces="true">So when it was time for my Toolkit Allowance renewal (thanks </span><a class="_e75a791d-denali-editor-page-rtfLink" href="https://purple.telstra.com/" target="_blank" rel="noopener noreferrer"><span data-preserver-spaces="true">Telstra Purple</span></a><span data-preserver-spaces="true">!), I decided to bump to the best machine I could afford. I read many confusing opinions on the internet about the role of the GPU in a stream, and I couldn&#8217;t decide whether to get a machine with a good GPU or one with a good CPU. Since I don&#8217;t buy a machine every day, and I had some room to bump the budget in addition to the allowance, I decided to get both, GPU AND CPU :).</span></p>
<p>Now I stream from a Macbook Pro (32G RAM 2.3 GHz 8-Core Intel Core i9, Radeon Pro Vega 20 4 GB).<br />
The strongest voice I heard on the internet was that you want to concentrate on the CPU, but I will leave this homework for you. Needless to say, I don&#8217;t have a problem compiling, streaming, and recording at the same time now.</p>
<h2>Microphone</h2>
<p><img loading="lazy" decoding="async" class=" wp-image-713 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/05/microphone-1.jpg" alt="microphone" width="361" height="481" srcset="https://www.emadashi.com/wp-content/uploads/2020/05/microphone-1.jpg 726w, https://www.emadashi.com/wp-content/uploads/2020/05/microphone-1-225x300.jpg 225w, https://www.emadashi.com/wp-content/uploads/2020/05/microphone-1-660x880.jpg 660w" sizes="(max-width: 361px) 100vw, 361px" /></p>
<p><span data-preserver-spaces="true">Long before I got into streaming I started a </span><a class="_e75a791d-denali-editor-page-rtfLink" href="https://dotnetarabi.com" target="_blank" rel="noopener noreferrer"><span data-preserver-spaces="true">podcast</span></a><span data-preserver-spaces="true">, and for some time I was looking for the right microphone. I needed one that had good audio quality AND could record on its own in case I was on the go, and I found both in the </span><a class="_e75a791d-denali-editor-page-rtfLink" href="https://www.zoom.co.jp/products/handy-recorder/h2n-handy-recorder" target="_blank" rel="noopener noreferrer"><span data-preserver-spaces="true">Zoom H2N</span></a><span data-preserver-spaces="true">. I call this the awesome microphone, but I also believe it is overkill for most people who would like to start streaming or producing professional video content.</span></p>
<h2>Microphone Mount</h2>
<p><img loading="lazy" decoding="async" class=" wp-image-711 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/05/microphone-mount.jpg" alt="" width="516" height="387" srcset="https://www.emadashi.com/wp-content/uploads/2020/05/microphone-mount.jpg 1024w, https://www.emadashi.com/wp-content/uploads/2020/05/microphone-mount-300x225.jpg 300w, https://www.emadashi.com/wp-content/uploads/2020/05/microphone-mount-768x576.jpg 768w, https://www.emadashi.com/wp-content/uploads/2020/05/microphone-mount-660x495.jpg 660w" sizes="(max-width: 516px) 100vw, 516px" /></p>
<p>I got an unbranded microphone mount from eBay. You can see from the picture that I hang it on the bookshelf next to me, not on my desk; my desk is pretty thick and the base of the mount won&#8217;t fit on it. A nice side effect of hanging it on the bookshelf is that the noise from the keyboard is barely present.</p>
<p>I set the microphone&#8217;s gain to the highest; I stream at night when the kids are asleep, and I am in a relatively quiet suburb. I position the microphone as close as possible to my face, just before it appears in my camera. I am not 100% sure that the audience doesn&#8217;t get any pffff sound due to the high gain, but so far no one has complained :).</p>
<h2>Camera</h2>
<p><img loading="lazy" decoding="async" class=" wp-image-710 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/05/camera.jpg" alt="camera" width="451" height="338" srcset="https://www.emadashi.com/wp-content/uploads/2020/05/camera.jpg 800w, https://www.emadashi.com/wp-content/uploads/2020/05/camera-300x225.jpg 300w, https://www.emadashi.com/wp-content/uploads/2020/05/camera-768x576.jpg 768w, https://www.emadashi.com/wp-content/uploads/2020/05/camera-660x495.jpg 660w" sizes="(max-width: 451px) 100vw, 451px" /></p>
<p><span data-preserver-spaces="true">I have the </span><a class="_e75a791d-denali-editor-page-rtfLink" href="https://www.logitech.com/en-us/product/hd-pro-webcam-c920" target="_blank" rel="noopener noreferrer"><span data-preserver-spaces="true">Logitech C920</span></a><span data-preserver-spaces="true">; it&#8217;s very common amongst streamers, and for a very good reason. It&#8217;s very well balanced between price and features; I love the angle it takes, the quality of the picture, and the auto-focus. Having said that, the only thing I use it for is recording my face; I don&#8217;t do close-up reviews of products and I don&#8217;t need to move it off the top of my screen.</span></p>
<h2>Keyboard</h2>
<p><img loading="lazy" decoding="async" class="size-full wp-image-708 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/05/keyboard.jpg" alt="keyboard" width="800" height="440" srcset="https://www.emadashi.com/wp-content/uploads/2020/05/keyboard.jpg 800w, https://www.emadashi.com/wp-content/uploads/2020/05/keyboard-300x165.jpg 300w, https://www.emadashi.com/wp-content/uploads/2020/05/keyboard-768x422.jpg 768w, https://www.emadashi.com/wp-content/uploads/2020/05/keyboard-660x363.jpg 660w" sizes="(max-width: 800px) 100vw, 800px" /></p>
<p><span data-preserver-spaces="true">Late 2018 I bought one of the early models of </span><a class="_e75a791d-denali-editor-page-rtfLink" href="https://mechanicalkeyboards.com/shop/index.php?l=product_detail&amp;p=3917" target="_blank" rel="noopener noreferrer"><span data-preserver-spaces="true">Vortex Race 3</span></a><span data-preserver-spaces="true"> with the Silver switch. This is absolutely not necessary for a successful stream, but mechanical keyboards just feel luxurious :D.</span></p>
<p><span data-preserver-spaces="true">I kinda regret the Silver switch as I tend to make too many mistakes while touch typing. If I could go back, I would get the Red switch instead.</span></p>
<h2>USB Hub(s)</h2>
<p>I would have loved to get a proper docking station, but the decent ones are expensive AND they don&#8217;t support my old VGA monitors, so I went with the hubs option instead.</p>
<h3>Five-port UGREEN USB-C hub</h3>
<p><img loading="lazy" decoding="async" class=" wp-image-714 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/05/ugreen.jpg" alt="UGreen USB-C hub" width="544" height="335" srcset="https://www.emadashi.com/wp-content/uploads/2020/05/ugreen.jpg 1024w, https://www.emadashi.com/wp-content/uploads/2020/05/ugreen-300x185.jpg 300w, https://www.emadashi.com/wp-content/uploads/2020/05/ugreen-768x473.jpg 768w, https://www.emadashi.com/wp-content/uploads/2020/05/ugreen-660x407.jpg 660w" sizes="(max-width: 544px) 100vw, 544px" /></p>
<p><span data-preserver-spaces="true">It&#8217;s funny that the hub is too old now that I couldn&#8217;t find it on their website <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f600.png" alt="😀" class="wp-smiley" style="height: 1em; max-height: 1em;" /> to link it in this post, here is a picture for it.</span></p>
<p><span data-preserver-spaces="true">This hub takes:</span></p>
<ul>
<li class="ql-indent-1"><span data-preserver-spaces="true">One of the monitors</span></li>
<li class="ql-indent-1"><span data-preserver-spaces="true">an Ethernet cable (a must for streaming in my opinion)</span></li>
<li class="ql-indent-1"><span data-preserver-spaces="true">The microphone</span></li>
<li class="ql-indent-1"><span data-preserver-spaces="true">The mouse</span></li>
<li class="ql-indent-1"><span data-preserver-spaces="true">The camera</span></li>
</ul>
<h3>Generic USB-C VGA adapter</h3>
<p><img loading="lazy" decoding="async" class=" wp-image-715 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/05/vga-adapter.jpg" alt="vga-adapter" width="369" height="261" srcset="https://www.emadashi.com/wp-content/uploads/2020/05/vga-adapter.jpg 800w, https://www.emadashi.com/wp-content/uploads/2020/05/vga-adapter-300x212.jpg 300w, https://www.emadashi.com/wp-content/uploads/2020/05/vga-adapter-768x542.jpg 768w, https://www.emadashi.com/wp-content/uploads/2020/05/vga-adapter-660x466.jpg 660w, https://www.emadashi.com/wp-content/uploads/2020/05/vga-adapter-200x140.jpg 200w" sizes="(max-width: 369px) 100vw, 369px" /></p>
<p>It has an extra USB-A port that takes my keyboard, while the VGA output drives the other monitor.</p>
<h3>Key-chain USB-A to USB-C adapter</h3>
<p><img loading="lazy" decoding="async" class=" wp-image-716 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/05/keychain-adapter-1024x1024.jpg" alt="keychain-adapter" width="367" height="367" srcset="https://www.emadashi.com/wp-content/uploads/2020/05/keychain-adapter-1024x1024.jpg 1024w, https://www.emadashi.com/wp-content/uploads/2020/05/keychain-adapter-300x300.jpg 300w, https://www.emadashi.com/wp-content/uploads/2020/05/keychain-adapter-150x150.jpg 150w, https://www.emadashi.com/wp-content/uploads/2020/05/keychain-adapter-768x768.jpg 768w, https://www.emadashi.com/wp-content/uploads/2020/05/keychain-adapter-1536x1536.jpg 1536w, https://www.emadashi.com/wp-content/uploads/2020/05/keychain-adapter-2048x2048.jpg 2048w, https://www.emadashi.com/wp-content/uploads/2020/05/keychain-adapter-660x660.jpg 660w" sizes="(max-width: 367px) 100vw, 367px" /></p>
<p>I use this adapter to connect my USB Logitech headset. I don&#8217;t use this headset&#8217;s microphone, only the headphones. Having said that, I don&#8217;t really play any music or crazy sounds during the stream, so I don&#8217;t put the headset on my head most of the time.</p>
<h2>Studio Setup</h2>
<p><img loading="lazy" decoding="async" class="size-large wp-image-703 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/05/studio-left-1024x768.jpg" alt="Studio Left" width="665" height="499" srcset="https://www.emadashi.com/wp-content/uploads/2020/05/studio-left-1024x768.jpg 1024w, https://www.emadashi.com/wp-content/uploads/2020/05/studio-left-300x225.jpg 300w, https://www.emadashi.com/wp-content/uploads/2020/05/studio-left-768x576.jpg 768w, https://www.emadashi.com/wp-content/uploads/2020/05/studio-left-1536x1152.jpg 1536w, https://www.emadashi.com/wp-content/uploads/2020/05/studio-left-2048x1536.jpg 2048w, https://www.emadashi.com/wp-content/uploads/2020/05/studio-left-660x495.jpg 660w" sizes="(max-width: 665px) 100vw, 665px" /></p>
<p><img loading="lazy" decoding="async" class="size-large wp-image-704 alignnone" src="https://www.emadashi.com/wp-content/uploads/2020/05/studio-right-1024x768.jpg" alt="Studio Right" width="665" height="499" srcset="https://www.emadashi.com/wp-content/uploads/2020/05/studio-right-1024x768.jpg 1024w, https://www.emadashi.com/wp-content/uploads/2020/05/studio-right-300x225.jpg 300w, https://www.emadashi.com/wp-content/uploads/2020/05/studio-right-768x576.jpg 768w, https://www.emadashi.com/wp-content/uploads/2020/05/studio-right-1536x1152.jpg 1536w, https://www.emadashi.com/wp-content/uploads/2020/05/studio-right-2048x1536.jpg 2048w, https://www.emadashi.com/wp-content/uploads/2020/05/studio-right-660x495.jpg 660w" sizes="(max-width: 665px) 100vw, 665px" /></p>
<p><span data-preserver-spaces="true">I was lucky enough to find a relatively cheap </span><a class="_e75a791d-denali-editor-page-rtfLink" href="https://www.ebay.com.au/itm/Studio-Photo-White-Black-Green-Screen-Backdrop-Light-Stand-Umbrella-Lighting-Kit/323518324905?ssPageName=STRK%3AMEBIDX%3AIT&amp;_trksid=p2057872.m2749.l2649" target="_blank" rel="noopener noreferrer"><span data-preserver-spaces="true">studio setup</span></a><span data-preserver-spaces="true"> on eBay. In preparation for this post I searched for the item on eBay and it seems the price has gone up :).</span></p>
<p><span data-preserver-spaces="true">The bundle had:</span></p>
<ul>
<li><span data-preserver-spaces="true">Green, White, and Black backdrops. The material is very poor; I am not sure what it is called, but I tried to iron it once and it almost melted. It works 99% of the time, but I noticed recently that the wrinkles confuse my chroma key, so I don&#8217;t get a perfect removal. The good thing is that it&#8217;s not too noticeable, and most of the time during my stream the focus is on the code scene.</span></li>
<li><span data-preserver-spaces="true">A big stand to hold the green backdrop; it consists of two extensible mounts and a four-piece rod that sits across them. You then put the green backdrop on it and tighten it with clips that came with the bundle (a little bit of a hassle really).</span></li>
<li><span data-preserver-spaces="true">Two mounts to hold the light bulbs</span></li>
<li><span data-preserver-spaces="true">Two 135W 5500K light bulbs</span></li>
<li><span data-preserver-spaces="true">Two white umbrellas</span></li>
<li><span data-preserver-spaces="true">And some accessories I don&#8217;t use (two black umbrellas and reflectors)</span></li>
</ul>
<p><span data-preserver-spaces="true">If I could go back, I would change the tools a little:</span></p>
<ul>
<li><span data-preserver-spaces="true">I would get the shiny new LED lights that can be mounted to the desk behind the screens. There are many lighting setups like this, but I think the newest one out there is the </span><a class="_e75a791d-denali-editor-page-rtfLink" href="https://www.elgato.com/en/gaming/key-light" target="_blank" rel="noopener noreferrer"><span data-preserver-spaces="true">Elgato Key Light</span></a><span data-preserver-spaces="true"> (<strong>Update</strong>: I have received mixed feedback about the quality of the Elgato Key Light, so this is NOT a recommendation. Please do your homework and assess before buying). The lighting itself is not a problem, but setting up the lighting every time is just too tedious.</span></li>
<li><span data-preserver-spaces="true">I would get an easy-setup green screen; Elgato also has a </span><a class="_e75a791d-denali-editor-page-rtfLink" href="https://www.elgato.com/en/gaming/green-screen" target="_blank" rel="noopener noreferrer"><span data-preserver-spaces="true">collapsible green screen</span></a><span data-preserver-spaces="true"> that can be easily set up/taken down. For the same reason: setting this thing up and tearing it down just takes an uncomfortable amount of time, in addition to the wrinkles problem above.</span></li>
</ul>
<h2>Surface Pro 4 (Not necessary)</h2>
<p>Sometimes during my stream, I&#8217;d like to explain something on a whiteboard, and using the mouse for that isn&#8217;t really natural. So I thought I could use my old Surface Pro 4. I tried at the beginning to use NDI to stream from two laptops, but it just wouldn&#8217;t work.</p>
<p>So instead I used the Microsoft Whiteboard app: I use my SP4 to draw on the whiteboard, and then connect to the same whiteboard from my Mac. There can be a small delay between drawing and it appearing on the screen, but it wasn&#8217;t much. However, this setup is a little tedious and I am thinking of alternatives.</p>
<h2>Summary</h2>
<p>So this is my <a href="https://twitch.tv/emadashi">Twitch</a> streaming setup. It&#8217;s worth mentioning that I didn&#8217;t get all of this at once; I accumulated it over time. I had the microphone first, then the camera, then the green screen and lighting, etc., and this was over many months.</p>
<p>You can also slice the budget even more if you choose a lower-end microphone and a normal keyboard.</p>
<p>I hope this was beneficial, if you have any questions please let me know in the comments, or ping me on Twitter at <a href="https://twitter.com/emadashi">@emadashi</a>, would love to hear from you. Stay tuned for the two coming sections: Software and Humanware.</p>
<p>&nbsp;</p>
<p>The post <a href="https://www.emadashi.com/2020/05/my-twitch-streaming-setup-part-1-hardware/">My Twitch Streaming Setup &#8211; Part 1 Hardware</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.emadashi.com/2020/05/my-twitch-streaming-setup-part-1-hardware/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How to Fix Minikube Invalid Profile</title>
		<link>https://www.emadashi.com/2020/03/how-to-fix-minikube-invalid-profile/</link>
					<comments>https://www.emadashi.com/2020/03/how-to-fix-minikube-invalid-profile/#respond</comments>
		
		<dc:creator><![CDATA[Emad Alashi]]></dc:creator>
		<pubDate>Sun, 15 Mar 2020 14:01:56 +0000</pubDate>
				<category><![CDATA[Misc]]></category>
		<guid isPermaLink="false">http://www.emadashi.com/?p=689</guid>

					<description><![CDATA[<p>TLDR; Recent Minikube versions might not be able to read old profiles. In this post we will see how to fix a Minikube invalid profile, at least how I did it in my case. minikube profile list, invalid profile Last Saturday, I had the privilege to speak at GIB Melbourne online, where I presented about… <span class="read-more"><a href="https://www.emadashi.com/2020/03/how-to-fix-minikube-invalid-profile/">Read More &#187;</a></span></p>
<p>The post <a href="https://www.emadashi.com/2020/03/how-to-fix-minikube-invalid-profile/">How to Fix Minikube Invalid Profile</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>TLDR; Recent Minikube versions might not be able to read old profiles. In this post we will see how to fix a Minikube invalid profile, at least how I did it in my case.</p>



<h2 class="wp-block-heading">minikube profile list, invalid profile</h2>



<p>Last Saturday, I had the privilege to speak at <a href="https://www.integrationbootcamp.com/">GIB Melbourne</a> online, where I presented about the <a href="https://www.youtube.com/watch?v=tGMJOfa9ZT8">Self-hosted Azure API Management Gateway</a>. In the presentation, I needed to demonstrate using Minikube, and I had spent a couple of days preparing my cluster and making sure everything was good and ready.</p>



<p>One day before the presentation, Minikube suggested I upgrade to the latest version, and I thought: &#8220;what is the worst thing that can happen&#8221;, but then the responsible part of my brain begged me not to fall into this trap, and I stopped. Thank god I did!</p>



<p>After the presentation I decided to upgrade, so I upgraded to version 1.8.1 (I can&#8217;t remember the version I had before), but then none of my clusters worked!</p>



<p>When I tried to list them using the command &#8220;minikube profile list&#8221;, I found them listed under the invalid profiles.</p>



<p>Oh, this is not good! Was this update a breaking change that renders my clusters unusable? Or is it that the new Minikube version doesn&#8217;t understand the old profile configuration? And is the only way to solve the problem to delete my clusters?! I am not happy.</p>



<h2 class="wp-block-heading">Can I fix the configs?</h2>



<p>Before worrying about breaking changes, let me check what a valid profile looks like in the new version. I created a new cluster and compared the two profiles. You can find a cluster&#8217;s profile in <em>.minikube/profiles/[ProfileName]/config.json</em>.</p>



<p>The following are the differences that I have noticed:</p>



<figure class="wp-block-image size-large is-resized"><img loading="lazy" decoding="async" class="wp-image-686" src="https://www.emadashi.com/wp-content/uploads/2020/03/minikube-profile-comparison-1-1024x600.jpg" alt="comparison between the old and new minikube profile" width="992" height="580" srcset="https://www.emadashi.com/wp-content/uploads/2020/03/minikube-profile-comparison-1-1024x600.jpg 1024w, https://www.emadashi.com/wp-content/uploads/2020/03/minikube-profile-comparison-1-300x176.jpg 300w, https://www.emadashi.com/wp-content/uploads/2020/03/minikube-profile-comparison-1-768x450.jpg 768w, https://www.emadashi.com/wp-content/uploads/2020/03/minikube-profile-comparison-1-1536x900.jpg 1536w, https://www.emadashi.com/wp-content/uploads/2020/03/minikube-profile-comparison-1-2048x1199.jpg 2048w, https://www.emadashi.com/wp-content/uploads/2020/03/minikube-profile-comparison-1-660x387.jpg 660w" sizes="(max-width: 992px) 100vw, 992px" /></figure>



<ul>
<li>There is no &#8220;MachineConfig&#8221; node in the configuration anymore; most of its properties moved one level higher in the JSON path.</li>
<li>The &#8220;VMDriver&#8221; changed to &#8220;Driver&#8221;.</li>
<li>The &#8220;ContainerRuntime&#8221; property is removed.</li>
<li>Four new properties are introduced:
<ul>
<li>HyperUseExternalSwitch</li>
<li>HypervExternalAdapter</li>
<li>HostOnlyNicType</li>
<li>NatNicType</li>
</ul>
</li>
<li>The &#8220;Nodes&#8221; collection is added, where each JSON node represents a Kubernetes cluster node. Each node has the following properties:
<ul>
<li>Name</li>
<li>IP</li>
<li>Port</li>
<li>KubernetesVersion</li>
<li>ControlPlane</li>
<li>Worker</li>
</ul>
</li>
<li>In the KubernetesConfig, the Node properties are moved to the newly created collection &#8220;Nodes&#8221; mentioned above:
<ul>
<li>&#8220;NodeIP&#8221; moved to &#8220;IP&#8221;</li>
<li>&#8220;NodePort&#8221; moved to &#8220;Port&#8221;</li>
<li>&#8220;NodeName&#8221; moved to &#8220;Name&#8221;</li>
<li>A new property ClusterName is added</li>
</ul>
</li>
</ul>
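<p>To make the mapping concrete, here is a rough sketch of the reshaping above as code. This is purely illustrative and not something Minikube provides; the key names come from the comparison, while the defaults (port 8443, the ClusterName fallback, the ControlPlane/Worker flags) are guesses, so compare against a freshly created profile before trusting them.</p>

```python
import json

def migrate_profile(old: dict) -> dict:
    """Reshape a pre-1.8 Minikube profile into the newer layout.

    Illustrative only: key names follow the comparison above, while the
    defaults are guesses -- verify against a freshly created profile.
    """
    machine = old.get("MachineConfig", {})
    k8s = dict(old.get("KubernetesConfig", {}))

    # Node-specific values move from KubernetesConfig into the new "Nodes" list.
    node = {
        "Name": k8s.pop("NodeName", ""),
        "IP": k8s.pop("NodeIP", ""),
        "Port": k8s.pop("NodePort", 8443),
        "KubernetesVersion": k8s.get("KubernetesVersion", ""),
        "ControlPlane": True,
        "Worker": True,
    }
    k8s.setdefault("ClusterName", old.get("Name", ""))  # newly added property

    # MachineConfig's properties move one level up; "VMDriver" is renamed
    # to "Driver" and "ContainerRuntime" is dropped.
    new = {k: v for k, v in machine.items()
           if k not in ("VMDriver", "ContainerRuntime")}
    new["Name"] = old.get("Name", "")
    new["Driver"] = machine.get("VMDriver", "")
    new["KubernetesConfig"] = k8s
    new["Nodes"] = [node]
    return new

# A minimal old-style profile, just enough to show the transformation:
old = {
    "Name": "myprofile",
    "MachineConfig": {"VMDriver": "virtualbox", "ContainerRuntime": "docker", "CPUs": 2},
    "KubernetesConfig": {"KubernetesVersion": "v1.17.3", "NodeIP": "192.168.99.100",
                         "NodePort": 8443, "NodeName": "minikube"},
}
print(json.dumps(migrate_profile(old), indent=2))
```

<p>Running it on a real profile would mean loading <em>.minikube/profiles/[ProfileName]/config.json</em>, passing it through the function, and writing the result back (after taking a backup).</p>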



<h2 class="wp-block-heading">The Solution</h2>



<p>So what I did is change the old profile format to match the new one, setting the new and changed properties to the values that made the most sense, just like above. All was straightforward except for the node IP address; it was missing!</p>



<p>Digging a little deeper, I found the IP address value (and other properties) in the machine configuration &#8220;.minikube/machines/[clustername]/config.json&#8221;. I copied these values from there, ran my cluster, and it was resurrected from the dead!</p>
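<p>For completeness, reading that value programmatically could look like the sketch below. The path is the one mentioned above; the exact key the IP sits under (here assumed to be <em>Driver.IPAddress</em>) is a guess that may differ per driver, so inspect your own file first.</p>

```python
import json
import pathlib
import tempfile

def node_ip(minikube_home: str, cluster: str) -> str:
    """Read the node IP from [minikube_home]/machines/[cluster]/config.json.

    The Driver -> IPAddress location is an assumption; check your own file.
    """
    cfg_path = pathlib.Path(minikube_home, "machines", cluster, "config.json")
    cfg = json.loads(cfg_path.read_text())
    return cfg.get("Driver", {}).get("IPAddress", "")

# Demo against a throwaway directory standing in for ~/.minikube:
with tempfile.TemporaryDirectory() as home:
    machine_dir = pathlib.Path(home, "machines", "myprofile")
    machine_dir.mkdir(parents=True)
    (machine_dir / "config.json").write_text(
        json.dumps({"Driver": {"IPAddress": "192.168.99.100"}}))
    ip = node_ip(home, "myprofile")

print(ip)  # the value to copy into the profile's Nodes[0].IP
```

<p>The returned value is what goes into the &#8220;IP&#8221; property of the node entry in the fixed profile.</p>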



<p>I would have loved it if Minikube itself took care of fixing the configs rather than suggesting deleting the profiles. Or maybe that can be a pull request :).</p>



<p>I hope this helps.</p>
<p>The post <a href="https://www.emadashi.com/2020/03/how-to-fix-minikube-invalid-profile/">How to Fix Minikube Invalid Profile</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.emadashi.com/2020/03/how-to-fix-minikube-invalid-profile/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Posh-git on Mac using Oh My Zsh Themes</title>
		<link>https://www.emadashi.com/2019/10/posh-git-mac-using-oh-zsh-themes/</link>
					<comments>https://www.emadashi.com/2019/10/posh-git-mac-using-oh-zsh-themes/#respond</comments>
		
		<dc:creator><![CDATA[Emad Alashi]]></dc:creator>
		<pubDate>Wed, 16 Oct 2019 10:30:07 +0000</pubDate>
				<category><![CDATA[Development]]></category>
		<category><![CDATA[Misc]]></category>
		<guid isPermaLink="false">http://www.emadashi.com/?p=667</guid>

					<description><![CDATA[<p>This post explains how to have posh-git prompt style in Oh My Zsh theme on Mac. After 4 years of using Windows, I am coming back to using a Mac. And there are so many things in Windows I am missing already. One of these things is posh-git; I loved how in one glance to… <span class="read-more"><a href="https://www.emadashi.com/2019/10/posh-git-mac-using-oh-zsh-themes/">Read More &#187;</a></span></p>
<p>The post <a href="https://www.emadashi.com/2019/10/posh-git-mac-using-oh-zsh-themes/">Posh-git on Mac using Oh My Zsh Themes</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>This post explains how to have posh-git prompt style in Oh My Zsh theme on Mac.</p>
<p>After 4 years of using Windows, I am coming back to using a Mac. And there are <a href="https://twitter.com/EmadAshi/status/1183642418765168640?s=20">so many things</a> in Windows I am missing already. One of these things is <a href="https://github.com/dahlbyk/posh-git">posh-git</a>; I loved how in one glance at your prompt you know the status of your git repo: how many files changed, how many added, how many deleted, how many indexed&#8230; I just love it!</p>
<p>Once I moved to the Mac, I changed my shell to zsh using <a href="https://github.com/robbyrussell/oh-my-zsh">Oh My Zsh</a> due to the rich experience it brings to the terminal. I was delighted to see all these <a href="https://github.com/robbyrussell/oh-my-zsh/tree/master/themes">themes</a> and plugins, and then started looking for a theme that provided the same information the posh-git prompt provided. To my surprise, there was none! So I started my quest to see how I could change zsh, the theme, or the plugin to get such a prompt.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-675 size-full" src="https://www.emadashi.com/wp-content/uploads/2019/10/PromptDefaultLong.png" alt="A posh-git prompt that shows the number of files index and changed." width="418" height="22" srcset="https://www.emadashi.com/wp-content/uploads/2019/10/PromptDefaultLong.png 418w, https://www.emadashi.com/wp-content/uploads/2019/10/PromptDefaultLong-300x16.png 300w" sizes="(max-width: 418px) 100vw, 418px" /></p>
<p>Being lazy, I wanted to change an existing theme I like with the least amount of investment. I looked in the documentation to see how I could do that, and found the <a href="https://github.com/robbyrussell/oh-my-zsh/wiki/Customization">customisation wiki page</a>:</p>
<h2>Should I override the theme?</h2>
<p><a href="https://github.com/robbyrussell/oh-my-zsh/wiki/Customization#overriding-and-adding-themes">Overriding the theme</a> seemed to be the perfect solution; however, there were a couple of drawbacks:</p>
<ul>
<li>When you override a theme, you override the theme, period! This means that if the author changes something after you have overridden it, you will not get these new changes.</li>
<li>It was a little bit too much for me to grasp! When I looked at the <a href="https://github.com/robbyrussell/oh-my-zsh/blob/master/themes/avit.zsh-theme">avit theme</a> as an example, I had questions like: what are PROMPT and PROMPT2? What are all these special characters? Where is the reference/documentation for all of these? Are they theme-specific, or are they part of the zsh theme reference?</li>
</ul>
<p>Remember, I wanted to put in the least amount of effort, and I surely didn&#8217;t want to learn the whole thing! But while looking into the avit theme, one thing caught my attention: there was a clear reference to what seemed to be a function, <em><strong>git_prompt_info</strong></em>. And I thought this should be it, if I could only find where this function was defined and how to override it.</p>
<p>To my luck, it was <a href="https://github.com/robbyrussell/oh-my-zsh/wiki/Customization#overriding-internals">mentioned</a> in the customisation wiki page as an example!</p>
<h2>Override the internals it is!</h2>
<p>Ok great, now I know that I can customise <strong><em>git_prompt_info</em></strong>; all I need to do is mimic whatever posh-git does in that function!</p>
<p>So I hit <del>google</del> duckduckgo again in the hope that someone had already done this, and oh my! I found that <a href="https://github.com/lyze/posh-git-sh/">there is already</a> a port of it to bash. That&#8217;s great, now what should I do? Replace the call to <strong><em>git_prompt_info</em></strong> in the theme with a call to <strong><em>__posh_git_ps1</em></strong>? Or should I call it from <em><strong>git_prompt_info</strong></em>? Since <em><strong>git_prompt_info</strong></em> is an internal lib function, it is probably used in many themes, so it makes sense to just call <em><strong>__posh_git_ps1</strong></em> from within it. And to my good surprise, there is a <a href="https://github.com/lyze/posh-git-sh/issues/14">GitHub issue</a> in the posh-git-bash repo that discusses integrating with zsh; it&#8217;s even referenced in the main README.md file of the repo.</p>
<p>Initially I mistakenly called the <em><strong>__posh_git_ps1</strong></em> function, but I soon realised that I needed to print (echo) the git info just like <em><strong>git_prompt_info</strong></em> did, rather than change any variables; for that I should use <em><strong>__posh_git_echo</strong></em>.</p>
<p>And thus I ended up with a file called <em><strong>emad-git-prompt.zsh</strong></em> under the path <strong><em>~/.oh-my-zsh/custom</em></strong> with the content of posh-git-bash <a href="https://raw.githubusercontent.com/lyze/posh-git-sh/master/git-prompt.sh">here</a>, and at the end of the file I wrote the following code:</p>
<pre class="brush: bash; title: ; notranslate">
git_prompt_info () {
    __posh_git_echo
}
</pre>
<p>I hope this helps you <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>The post <a href="https://www.emadashi.com/2019/10/posh-git-mac-using-oh-zsh-themes/">Posh-git on Mac using Oh My Zsh Themes</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.emadashi.com/2019/10/posh-git-mac-using-oh-zsh-themes/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Learning  a New Programming Language (Go language as an Example)</title>
		<link>https://www.emadashi.com/2019/08/learning-a-new-programming-language-golang-as-an-example/</link>
					<comments>https://www.emadashi.com/2019/08/learning-a-new-programming-language-golang-as-an-example/#comments</comments>
		
		<dc:creator><![CDATA[Emad Alashi]]></dc:creator>
		<pubDate>Mon, 26 Aug 2019 13:38:25 +0000</pubDate>
				<category><![CDATA[Development]]></category>
		<category><![CDATA[Misc]]></category>
		<guid isPermaLink="false">http://www.emadashi.com/?p=650</guid>

					<description><![CDATA[<p>Summary This post explains why and how I learned the Go language. Hopefully this will help you to learn it quickly, or will inspire you on how to learn new languages. The Reason to Learn a New Language There can be many reasons why someone would want to learn a new language, the main ones… <span class="read-more"><a href="https://www.emadashi.com/2019/08/learning-a-new-programming-language-golang-as-an-example/">Read More &#187;</a></span></p>
<p>The post <a href="https://www.emadashi.com/2019/08/learning-a-new-programming-language-golang-as-an-example/">Learning  a New Programming Language (Go language as an Example)</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2>Summary</h2>
<p>This post explains why and how I learned the Go language. Hopefully this will help you to learn it quickly, or will inspire you on how to learn new languages.</p>
<h2>The Reason to Learn a New Language</h2>
<p>There can be many reasons why someone would want to learn a new language; the main ones to me are: 1) to solve a current business problem, 2) to learn concepts to adapt to current tools, 3) for fun and passion. Of course, you can have a mix of these reasons pushing you to learn a new language, or maybe just one of them that is strong enough.</p>
<p>For a very long time in my career, C# was my main programming language. I used JavaScript a lot too, but it always took a back seat until TypeScript came about and SPAs became the de facto front-end development model. So for 16 years, it has been two and a half languages for me, and I have never felt the need to learn another language (Java in university doesn&#8217;t count).</p>
<h3>Why not Haskell or F#?</h3>
<p>When functional programming became a thing again, I tried to find the right reason to learn F# (or Haskell), but with the explosion of technical information in our industry, time became even more scarce (I have three kids under 5!) and I really needed a stronger reason to spend my time learning a new language. Unfortunately, even with <a href="https://twitter.com/danielchambers">@DanielChambers</a>&#8217; continuous efforts at converting me :P, I didn&#8217;t jump on the wagon.</p>
<p>It&#8217;s funny that the reason why I couldn&#8217;t put in the effort was exactly the reason why functional programming itself is compelling; it&#8217;s the paradigm shift. The paradigm shift was so big that the organisations I spend most of my time helping couldn&#8217;t afford to embrace it; 20+ years of OOP meant a lot of investment in education, solutions and patterns, frameworks, and staffing that made it hard to embrace such a change.</p>
<p>In my experience with these organisations, there might have been situations where functional languages could have solved a problem better than an OOP one, but the return on investment would have been small in light of the legacy of these organisations.<br />
Of course, I am not suggesting that organisations should not invest in learning and adopting new technologies; that would be the path to failure! I am just describing the situation of most of the organisations I worked with.</p>
<p>This ruled out the business-need reason for me, and I was left with &#8220;learning concepts to adapt to current tools&#8221;, since passion alone was just not enough :P. Luckily, I am surrounded by friends who are passionate about functional programming, and I managed to learn enough from them about its benefits and how to bring them to my OOP world. Conversations with friends and colleagues like Daniel Chambers and Thomas Koster, and <a href="https://www.youtube.com/watch?v=aZCzG2I8Hds">attending lectures by professionals like Joe Bahari</a>, have helped me a lot in adopting functional concepts in my C#.</p>
<h2>I Found The Reasons in Go</h2>
<p><img loading="lazy" decoding="async" class="alignright size-full wp-image-662" src="https://www.emadashi.com/wp-content/uploads/2019/08/Gopher.png" alt="Gopher" width="344" height="382" srcset="https://www.emadashi.com/wp-content/uploads/2019/08/Gopher.png 344w, https://www.emadashi.com/wp-content/uploads/2019/08/Gopher-270x300.png 270w" sizes="(max-width: 344px) 100vw, 344px" /></p>
<p>So I stayed on two and a half languages, until last year when I got the chance to work on a project in which we used <a href="https://kubernetes.io/">Kubernetes</a>. Once you step into the Kubernetes world you realise that <a href="http://golang.org/">Go</a> is the hero language; Kubernetes is written in Go, <a href="https://helm.sh/">Helm</a> is written in Go, and the templates Helm uses are based on the Go template engine. Although you can use Kubernetes without learning the Go language, once you want to dig a little deeper it feels that knowing Go would be an advantage.</p>
<p>In addition to that, with Cloud being my main interest, I have been seeing Go used more and more as the language of choice for many cloud-native projects, products, and services.</p>
<p>During the same time, many of my colleagues and Twitter friends were porting their blogs from database-driven engines like WordPress to static website generators like Jekyll. I have two websites that could benefit from that: 1) my blog <a href="https://emadashi.com">emadashi.com</a>, 2) and the <a href="https://dotnetarabi.com">dotnetarabi.com</a> podcast, which I built on ASP.NET and Rob Conery&#8217;s SubSonic ORM. My friend <a href="http://yashints">Yaser Mehraban</a> kept teasing me and applying his peer pressure until I surrendered, and I finally started looking into moving my blog and my podcast to a static website generator.</p>
<p>My choice was <a href="https://gohugo.io">Hugo</a>; to me, it seemed the most mature static site generator with the least churn and the gentlest learning curve. And guess what, Hugo is written in Go! And its templating engine is based on Go&#8217;s. Just as with Kubernetes, you don&#8217;t need to learn Go to use Hugo, but it&#8217;s another compelling reason to be familiar with the language.</p>
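<p>To give a taste of what that engine looks like, here is a minimal sketch using Go&#8217;s standard <em><strong>text/template</strong></em> package; this is not Hugo itself, just the standard-library engine its <em>{{ &#8230; }}</em> syntax builds on, and the field names here are made up for the example:</p>
<pre class="brush: go; title: ; notranslate">
package main

import (
    "os"
    "text/template"
)

func main() {
    // Hugo templates build on this same {{ ... }} action syntax.
    tmpl := template.Must(template.New("page").Parse(
        "Title: {{.Title}}\nTags: {{range .Tags}}{{.}} {{end}}\n"))

    // Hypothetical page data, analogous to a post's front matter.
    data := struct {
        Title string
        Tags  []string
    }{"Hello Hugo", []string{"go", "hugo"}}

    if err := tmpl.Execute(os.Stdout, data); err != nil {
        panic(err)
    }
}
</pre>
<p>Running it prints the rendered text, with the <em>range</em> action looping over the tags, which is exactly the shape of logic you meet inside Hugo layouts.</p>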
<p>So by now, it felt like I was surrounded by problems that are being solved with Go, and it became more evident that there is a good chance I will work in Go in the future, even professionally.</p>
<p>All this, in addition to the low barrier to entry due to my familiarity with C#, encouraged me to jump into the water.</p>
<h2>Where did I Start?</h2>
<p>There are so many ways a person can start learning a language; I wanted to learn fast and learn just enough to get me going. For this reason, I didn&#8217;t pick up a book that would take me a while to get through, even though a book is probably the most profound way.</p>
<p>Instead of picking up a book, I went to <a href="https://golang.org">https://golang.org</a> and checked what the website had to offer; most modern projects and languages have documentation that includes tutorials and a Getting Started guide. If these guides are well crafted they give a great learning boost, and to my luck Go had great content.</p>
<h3>Set-up</h3>
<p>The first thing I wanted to do was set up the environment and run the most basic example (the hello world of Go); for that I followed the <a href="https://golang.org/doc/install">Getting Started</a> guide. Setting up the environment as a first step in learning a language is very important; it gives you an understanding of the requirements of the language and sets some expectations about how friendly the experience will be; it breaks the ice. It also paves the way to the Hands-On step, which I will explain later in this article.</p>
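<p>For reference, that first program is only a few lines; assuming a working installation, save it as <em><strong>hello.go</strong></em> and run it with <em><strong>go run hello.go</strong></em>:</p>
<pre class="brush: go; title: ; notranslate">
// The simplest possible Go program: one package, one import, one function.
package main

import "fmt"

func main() {
    fmt.Println("Hello, World!") // prints Hello, World!
}
</pre>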
<h3>Foundation</h3>
<p>Now that my environment was set up and I had run my hello world example, I needed to understand what was really going on: how the code compiles, how it runs, how it is packaged, how it is hosted; I needed the foundational concepts to establish firm ground to base my learning on. Learning the syntax and the various Go features will come along, and it will take time, but you can&#8217;t postpone the foundations. For this, I followed the &#8220;<a href="https://golang.org/doc/code.html">How to Write Go Code</a>&#8221; guide. The article&#8217;s title might not sound too foundational, but the content lays out the concepts.</p>
<h3>Cruise as you need</h3>
<p>If this is NOT your first programming language, then you are already familiar with the control-flow constructs: functions, loops, if clauses, etc. This gives you a very good advantage: you can sweep through these swiftly, as it&#8217;s unlikely they differ much from other languages. A fast run-through should be enough to capture anything that stands out.</p>
<p>For this I used the <a href="https://tour.golang.org/welcome/1">Tour</a>; there are two great things about it: 1) it has a simple, navigable structure, 2) it is paired with an online playground where you can experiment and confirm your understanding on the spot. There is a wide range of topics covered in the Tour; some I would go through fast, and some I would take my time to comprehend; e.g. slices can be a little confusing compared to arrays in C#.</p>
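<p>To illustrate the kind of thing that tripped me up: a Go slice is a view over an underlying array, so writing through the slice changes the array too, which is quite different from a C# array copy. A minimal sketch:</p>
<pre class="brush: go; title: ; notranslate">
package main

import "fmt"

func main() {
    arr := [4]int{1, 2, 3, 4} // a fixed-size array
    s := arr[1:3]             // a slice: a view into arr, not a copy

    s[0] = 99 // writes through to the underlying array

    fmt.Println(arr)            // [1 99 3 4]
    fmt.Println(len(s), cap(s)) // 2 3: length and capacity differ
}
</pre>
<p>The playground is perfect for poking at exactly this sort of behaviour until it clicks.</p>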
<p><strong>Note</strong>: Everyone&#8217;s experience is different, so it will not make sense to list the topics I went through swiftly and the ones I spent time on, use your own experience to judge that for yourself.</p>
<p>As for the advanced topics, I left them aside until I had a better grasp of the basics of the language; overwhelming yourself with advanced topics at this stage might have a counter-effect on your learning.</p>
<h3>Hands-On</h3>
<p>After understanding the basics from &#8220;How to Write Go Code&#8221; and sweeping through the Tour, it was time to get my hands on the language; this is the only way you can really understand and learn it.</p>
<p>I needed a problem to solve so I can have a driving purpose. The problem I chose is to import the existing records of DotNetArabi from the database (guests and episodes) to create corresponding Markdown files for the Hugo website, so <a href="https://github.com/eashi/dna-hugo-migration/">this was my first program</a>.</p>
<p>It&#8217;s important to understand that I wasn&#8217;t 100% on top of things yet (nor am I now :P), but it was the practical experience that I relied on to grasp the concepts and build up experience. If you leave the practical side for too long you will find yourself forgetting the basics, or learning advanced topics that you will rarely use. An iterative approach works very well here.</p>
<p>So I gradually built the application; each time I got stuck I&#8217;d either refer back to the Tour, or google it if it wasn&#8217;t covered there (e.g. connecting to a database). In each of these stuck-and-solved situations, I took a moment to make sure I understood the solution and the technique behind it. Copy and paste is absolutely fine as long as you pause and comprehend.</p>
<h3>Advanced Topics</h3>
<p>Ok, now at this stage I feel like I know the basics, and I am comfortable writing a program without big issues. But writing a program in Go at this level gives me very little advantage (if any) over writing it in another language; I am not getting the best out of the language. It&#8217;s the advanced features that make the difference, things like <a href="https://tour.golang.org/concurrency/1">goroutines and channels</a>, by which we achieve concurrency with the least amount of maintenance overhead.</p>
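<p>As a small taste of those features, here is a sketch of goroutines communicating over a channel; the squaring work is a stand-in for any real task you&#8217;d fan out:</p>
<pre class="brush: go; title: ; notranslate">
package main

import "fmt"

func main() {
    results := make(chan int)

    // Fan out three workers; each sends its result on the channel.
    for i := 1; i &lt;= 3; i++ {
        go func(n int) {
            results &lt;- n * n // send: blocks until someone receives
        }(i)
    }

    // Receive exactly three results; arrival order is not guaranteed.
    sum := 0
    for i := 0; i &lt; 3; i++ {
        sum += &lt;-results
    }
    fmt.Println(sum) // 14 (1 + 4 + 9)
}
</pre>
<p>Notice there are no locks here; the channel carries both the data and the synchronisation, which is the heart of Go&#8217;s concurrency model.</p>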
<p>Don&#8217;t be afraid of the advanced topics; avoiding them because they might be complicated will jeopardise the value of learning the language in the first place!</p>
<p>So for this, I continued the Tour into the advanced topics. The playground was of tremendous value, as you will need to change things around to confirm your understanding. The Tour also has exercises that will poke your thoughts; I highly advise trying these out! They will not just push you to comprehend the concepts, but will also expand your horizons regarding the use cases where you might need these advanced features.</p>
<p>It adds great fun and value if you can go back to your pet project and implement some of these advanced concepts, and this is what I did: I went back to my application and utilised goroutines to extract the data to the markdown files.</p>
<h3>Unit Testing</h3>
<p>Leaving unit tests to the end wasn&#8217;t undermining their value; rather, I wanted to focus on the language itself first, as test frameworks can push the complexity and the learning curve high enough on their own. My experience from JavaScript stings to this day :P.</p>
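<p>When you do get to them, Go keeps the barrier low: the <em><strong>testing</strong></em> package ships with the toolchain, so a test is just a function named <em>TestXxx</em> in a <em>_test.go</em> file, run with <em><strong>go test</strong></em>; no extra framework to pick. A hypothetical example (both the function and its test are invented for illustration):</p>
<pre class="brush: go; title: ; notranslate">
// greet_test.go — run with: go test
package greet

import "testing"

// Greet is the (hypothetical) function under test.
func Greet(name string) string {
    return "Hello, " + name + "!"
}

// TestGreet is discovered automatically by the go test tool.
func TestGreet(t *testing.T) {
    got := Greet("Go")
    want := "Hello, Go!"
    if got != want {
        t.Errorf("Greet(%q) = %q, want %q", "Go", got, want)
    }
}
</pre>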
<h3>The Best of Go</h3>
<p>Finally, the Go website has a section called &#8220;<a href="https://golang.org/doc/effective_go.html">Effective Go</a>&#8220;. It is not really reference documentation, but it is very valuable for writing Go code the way the language intends; it provides further context and well-rounded guidance on writing the language in its best form.</p>
<p>Here, too, I advise picking and choosing topics; reading the whole thing might be counter-productive.</p>
<h2>Close the Loop, Complete the Picture</h2>
<p>By now you&#8217;d think you were finished, but this is just the beginning; now is the time to tie things together by revising the language&#8217;s main characteristics, philosophy, and greatest advantages.</p>
<p>If we look specifically at Go, as our example, this might be things like the simplicity of Go, where there are no classes, no inheritance, and no generics. Or things like concurrency and how Go deals with state in asynchronous code execution. At this stage, it is valuable to check the videos, like <a href="https://www.youtube.com/watch?v=5bYO60-qYOI">Sameer Ajmani&#8217;s talk</a>, and the literature out there that discusses &#8220;<a href="https://gist.github.com/ungerik/3731476">Why Go</a>&#8220;.</p>
<p>I also found the <a href="https://golang.org/doc/faq">FAQ</a> on golang.org a valuable resource for some of the justifications and explanations. You should not read it as an article, though; pick and choose the topics of interest.</p>
<p>But isn&#8217;t this backwards? Shouldn&#8217;t I learn about these things at the beginning? True, you can learn these at the beginning, but you will not appreciate the claims until you put your hands on the problems in practice; until then they are merely claims in the air. So even if you start with these, you should revisit them at the end and make sure you close the loop.</p>
<h2>Conclusion</h2>
<p>In my journey to learn Go, I did the following:<br />
• I had a good reason<br />
• I installed the tools and ran the &#8220;hello world&#8221; program<br />
• I established the core concepts<br />
• I scanned through the control flow<br />
• I put my hands on the code and wrote my first program<br />
• I read the advanced topics, and used the playground to confirm my understanding<br />
• I watched more videos on why to use Go and its advantages</p>
<p>It&#8217;s important to say here that choosing a language to adopt in an organisation involves more than just learning it. If you are in a position to influence such a decision, just be mindful of that.</p>
<p>I hope this helps you out, enjoy coding :).</p>
<p>The post <a href="https://www.emadashi.com/2019/08/learning-a-new-programming-language-golang-as-an-example/">Learning  a New Programming Language (Go language as an Example)</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.emadashi.com/2019/08/learning-a-new-programming-language-golang-as-an-example/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>RBAC in Azure Kubernetes Service AKS on Twitch!</title>
		<link>https://www.emadashi.com/2019/03/rbac-azure-kubernetes-service-aks-twitch/</link>
					<comments>https://www.emadashi.com/2019/03/rbac-azure-kubernetes-service-aks-twitch/#comments</comments>
		
		<dc:creator><![CDATA[Emad Alashi]]></dc:creator>
		<pubDate>Mon, 18 Mar 2019 23:06:51 +0000</pubDate>
				<category><![CDATA[Misc]]></category>
		<category><![CDATA[live coding]]></category>
		<category><![CDATA[live streaming]]></category>
		<category><![CDATA[Presentation]]></category>
		<category><![CDATA[twitch]]></category>
		<category><![CDATA[video]]></category>
		<category><![CDATA[youtube]]></category>
		<guid isPermaLink="false">http://www.emadashi.com/?p=632</guid>

					<description><![CDATA[<p>tldr; I will be streaming on Twitch next Monday (25th of March) at 8:30 Melbourne time (GMT+11), configuring Azure Kubernetes AKS to use RBAC. For a long while, I&#8217;ve been thinking about streaming live development to Twitch or YouTube. Having spent some time behind the microphone while making DotNetArabi podcast, I can say there is… <span class="read-more"><a href="https://www.emadashi.com/2019/03/rbac-azure-kubernetes-service-aks-twitch/">Read More &#187;</a></span></p>
<p>The post <a href="https://www.emadashi.com/2019/03/rbac-azure-kubernetes-service-aks-twitch/">RBAC in Azure Kubernetes Service AKS on Twitch!</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><em>tldr; I will be streaming on Twitch<a href="https://www.twitch.tv/events/P59QGEbtTRahQ-4_T3bG-A"> next Monday (25th of March) at 8:30 Melbourne time (GMT+11)</a>, configuring Azure Kubernetes AKS to use RBAC.</em></p>
<p><img loading="lazy" decoding="async" class="alignright wp-image-639" src="https://www.emadashi.com/wp-content/uploads/2019/03/twitch-logo.jpg" alt="Twitch logo" width="280" height="157" srcset="https://www.emadashi.com/wp-content/uploads/2019/03/twitch-logo.jpg 474w, https://www.emadashi.com/wp-content/uploads/2019/03/twitch-logo-300x168.jpg 300w" sizes="(max-width: 280px) 100vw, 280px" /></p>
<p>For a long while, I&#8217;ve been thinking about streaming live development to <a href="https://twitch.tv">Twitch</a> or <a href="https://youtube.com">YouTube</a>. Having spent some time behind the microphone making the <a href="https://dotnetarabi.com/">DotNetArabi</a> podcast, I can say there is a satisfying feeling in producing content in a media format through which you can connect with the audience.</p>
<h2>Why not just offline video?</h2>
<p>I could just record an offline video and host it on YouTube, and that&#8217;s definitely a valuable medium. The problem with educational videos, specifically, is that they are a one-way communication channel, and without the entertainment factor of movies, these videos can be daunting, imprisoning, and hard to follow.</p>
<h2>The magic of live streaming</h2>
<p>But with live streaming, magic happens; it adds dimensions that make it more appealing:</p>
<ol>
<li><strong>It&#8217;s LIVE!</strong> It&#8217;s happening NOW, and this means a couple of things: it implicitly has the anticipation factor; things are still happening and might take interesting turns, just like live sports. In addition, by sharing the time span during which the event is happening, the audience gets the feeling of involvement and of &#8220;I was there when it happened&#8221;, even if they didn&#8217;t directly interact with the broadcaster.</li>
<li><strong>It&#8217;s real and revealing</strong>: When I was doing my homework preparing for this, I talked to my colleague Thomas Koster, and when I asked him about what could interest him in live streaming, his answer was:<br />
<blockquote>
<div>&#8230;it&#8217;s probably more the real time nature of it that appeals &#8211; to see somebody&#8217;s thought processes in action, as long as the broadcaster doesn&#8217;t waste too much time going around in circles.</div>
<div></div>
<div>For example, watching somebody figure out a puzzle solution in the game The Witness in real time is much more interesting and valuable than watching a rehearsed, prepared performance of only the final solution.</div>
</blockquote>
<p>This is the ultimate stage for a developer broadcaster; it requires a lot of bravery and experience. I&#8217;d love to be able to do this soon, but it&#8217;s really the 3rd reason below that drew me to streaming.</li>
<li><strong>It&#8217;s two-way communication</strong>: the interactive communication between the broadcaster and the audience brings the video to life. It provides a timely opportunity to get the best out of this communication, whether by the audience correcting the broadcaster, or the broadcaster being available for immediate inquiries.</li>
</ol>
<p>It was specifically this last reason that got me interested in live streaming; I want this relationship with my audience; a collaborative experience where value comes from everyone and flows in all directions.</p>
<h2>So, I am doing my first stream!</h2>
<p>I have been following Jeff Fritz <a href="https://twitter.com/csharpfritz">@csharpfritz</a> and Suz Hinton <a href="https://twitter.com/noopkat">@noopkat</a> and am greatly inspired by their amazing work! Also, <a href="https://twitter.com/GeoffreyHuntley">@geoffreyhuntley</a> has started his journey and gave me the last nudge to jump into this space. I&#8217;ve learned a lot from Suz&#8217;s post &#8220;<a href="http://meow.noopkat.com/lessons-from-one-year-of-streaming-on-twitch/"><em>Lessons from my first year of live coding on Twitch</em></a>&#8220;, and recently Jeff&#8217;s &#8220;<a href="https://jeffreyfritz.com/2019/01/live-streaming-setup-2019-edition/"><em>Live Streaming Setup – 2019 Edition</em></a>&#8221; (don&#8217;t let it scare you, you don&#8217;t have to do it all!).</p>
<p>My next stream will be about <a href="https://www.twitch.tv/events/P59QGEbtTRahQ-4_T3bG-A">Role Based Access Control (RBAC) in Azure Kubernetes AKS</a>, I will walk you through RBAC, OAuth2 Device Flow, and how this works within Azure AKS, with hands-on live deployments and configuration.</p>
<h2>What is my goal, and what is not?</h2>
<p>What I am trying to achieve here is two-way communication through the session I have with my audience, that&#8217;s it.</p>
<h2>Am I going to do this constantly now?</h2>
<p>Actually, I don&#8217;t know! To me this is an experiment; I might keep doing it, or this might be my first AND LAST stream, let&#8217;s see what the future brings. <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>The post <a href="https://www.emadashi.com/2019/03/rbac-azure-kubernetes-service-aks-twitch/">RBAC in Azure Kubernetes Service AKS on Twitch!</a> appeared first on <a href="https://www.emadashi.com">Emad Alashi</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.emadashi.com/2019/03/rbac-azure-kubernetes-service-aks-twitch/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
	</channel>
</rss>
