Author Archives: Emad Alashi

“Azure Functions on Kubernetes” Talk With Integration Down Under Meetup

Earlier this month I was invited to talk about Azure Functions on Kubernetes at the Integration Down Under meetup. It’s an amazing meetup held by highly regarded professionals from all around Australia.

Below is the recording of the session; make sure to follow their channel, as they post regularly. Also, all feedback is welcome :).

My Twitch Streaming Setup – Part 2 Software

This is part 2 of the three-part blog post series about my Twitch streaming setup.

  • Hardware
  • Software (this post)
  • Humanware

As I have said before, I have learned a lot from amazing streamers like @noopkat and @csharpfritz, so you will find a lot of this content matches theirs.

OBS Streaming Configuration

OBS is the main software that streams to the streaming service (Twitch in my case). I used Streamlabs at the beginning, but because it’s just an abstraction over OBS, I faced some limitations when I wanted to try different plugins. So I preferred to work directly with OBS itself.

Media is not really my expertise, and I don’t like to stray away from the default configuration OBS has for streaming, so I let OBS run its test and suggest the best configuration. The following is what it suggested:

  • Output
    • Video Bitrate: 2500 Kbps
    • Encoder: Software (x264)
    • Audio Bitrate: 160 Kbps
  • Video:
    • Base (Canvas) Resolution: 1920×1080
    • Output (scaled) Resolution: 1280×720 (I am not sure if this is the best configuration, but this is what I am using now 🤷🏻‍♂️).
    • Common FPS Values: 60
  • Audio:
    • Sample Rate: 44.1 kHz
    • Channels: Stereo

OBS Scene Setup

In OBS, the way you construct the scene is by creating layers of Sources; each Source can be an image, a web page, a video, etc. It takes a little time to get used to, but you can check their website for more detailed guides.

Below is the scene setup I use, and here it is in exported JSON format.

The Starting Soon scene

Starting Soon

When it’s time for my stream to start, I don’t start the meaty content of the stream instantly. Instead, I display the Starting Soon scene to give people a chance to join; this only lasts a couple of minutes, not more.

In this scene, I only show a small video in a loop, without exposing my microphone audio. Some, however, make it really cool, like @csharpfritz and @BaldBeardedBuilder, who display their shadows as they move around preparing for the stream.

The Me Talking scene

Me Talking scene

After displaying the Starting Soon scene for about two minutes, I bring the Me Talking scene up. It’s a focus on my face, with the chat overlay displayed next to it. In the scene, I also display my Twitter handle so that people who don’t know me can instantly look me up on Twitter (I stole this from @davidwengier :D).

I use this scene to establish a connection with the viewers, without them being distracted by code. It feels more direct and clear. I usually do a recap here: I explain what we went through in the last session, and what our plan is for that day’s session.

In these scenes, I utilise my green screen. My office is not the best office in the world, and having a green screen means I can put up a soothing blue background. I haven’t gone crazy with my backgrounds yet, but I will experiment with this in the future.

The biggest trick in this one is to remember to switch to the Code scene; it has happened twice that I jumped into the coding part while the displayed scene was still just my big face, no code, *facepalm*! There are some plugins that allow automatic switching, but I haven’t checked them out.

I also display the chat-box using the Chat Box extension from Streamlabs. I display this on all my scenes (oops! except for the Secret scene below, I just remembered while typing this 😅). The reason I display the chat is that Twitch doesn’t allow you to keep the videos as an archive, so I offload them to YouTube. Once the video is on YouTube, there will be no capture of the chat unless I record it as part of the video.

I know that if I stream on YouTube directly, the chat messages will be replayed alongside the recording of the stream, which I think is a beautiful feature. I might consider broadcasting on YouTube in the future, but I’m focusing on one platform for now.

The Coding scene

Coding Scene

The coding scene is the one I show for most of the stream. I have a part (on the left) where I show the code editor/desktop/web pages, a part where I show the chat messages (top right), and a part where I show my camera (bottom right).

You will notice that I totally separate the chat and the camera from the code editor, unlike other streamers who show their camera and chat on top of the code editor. My main reason for this setup is that in rare cases I have to show something in the right corner of the screen, and the camera or the chat would otherwise obstruct it.

The Secret scene

Secret Scene

In this scene, I show a funny video of people juggling knives in a loop. I show the video, my camera, and my audio.

Sometimes I need to display secret tokens or passwords on my screen. I have three monitors but I stream only one of them, so I could simply move my editor to another screen. However, moving the windows around is a little tricky because they don’t fill my screen (check the “The Main Monitor” section below for why). Thus, I decided to have this Secret scene; it’s also funny 😀

I have streamed before about writing a Visual Studio Code extension that hides YAML nodes that are secrets, but it uses regular expressions, and it was never published. My next stream, God willing, will be about a new extension that is built better using a proper YAML parser (did I just say “build it again but better”?! We developers never change!).

Audio

I use Soundflower to create the right audio setup. It is necessary for routing the audio output of your machine back in as another audio source for the stream. Here is a video on YouTube I found useful on how to set it up.

Audio Setup

I use my H2N microphone mentioned in part 1 as the main microphone. Music is too distracting to me during coding, so I don’t play music at all.

The Pilot’s View of the Setup

This is my view when I am streaming; I have two external monitors and the laptop’s screen:

The Main Monitor

This is where I write the code. You can see that I have left some room on the right to fit the chat overlay so that it’s captured with the video. This makes the chat a permanent part of the recording, so people who watch the video later on a different medium can follow my comments on the chat.

Coding View

It took some time to get used to because I am usually a full-screen guy, but after a while you just don’t see the void.

The OBS monitor

OBS Coding View

This is where I leave my OBS open, and where I click to switch scenes whenever I need to.

I also use it sometimes to read the chat from the chat overlay. This is not optimal, as I often find myself squinting to read the small font. And because it’s away from the camera, it seems as though I “look away” from the audience to read what they are saying.

The only good thing about reading the messages from OBS itself is that I am reading from the same view the audience reads from. This way, if for some reason the overlay is not working, I won’t be reading something the audience is not seeing.

In my future streams, I will grab the link of the chat overlay and put it in a browser, and then squeeze that browser window next to my VSCode. This way I won’t look away from the audience to read their messages. (I tried this while proof-reading this post; it didn’t work because the chat overlay has a minimum width :(. I will update you when I find a better solution.)

The laptop monitor

Twitch Screen

I use this screen as an auxiliary monitor if required; recently I started opening the native Twitch app to see how things are going on the other end of the stream. (I have no idea why the Twitch app was complaining about the internet at that time :D.)

AppleScript

I have been a Windows user for so many years of my life, and I wish there were an equivalent to AppleScript in Windows (maybe there is; let me know if you know of one). It automates many aspects of macOS: opening applications, resizing windows, changing settings, and a lot more.

I got this tip from @noopkat (like many other tips). Just before I start my streaming session, I run my script and it opens OBS, opens a clean browser window, resizes my VSCode and places it in the right position…everything! You can have a look at it here.

Visual Studio Code

After .NET Core arrived and a MacBook became my main PC, my main IDE has become Visual Studio Code. I love its reliability, lightness, and extensibility.

There is a feature in VSCode called “Screencast Mode” (thanks @ShahidDev for the tip); it displays the keys pressed on the keyboard. This is a very good way to share the love of shortcuts with the audience.

Screencast Mode

There are also tools that work at the OS level; I have my eye on Keycaster, but I haven’t tried it yet.

Zsh as shell

I can’t really say a lot about this one; it’s a feature-rich shell that is also extensible. However, I haven’t really utilised a lot of its features. I hope @tkoster is not disappointed, as he is my guide in anything related to Unix and Linux!

Twitch channel setup

Truly, Twitch does NOT have the best UX for their platform; it’s confusing to say the least. However, they are constantly changing things around and trying their best to improve.

The customisable page in your channel (at least at the time of writing these words) is the About page, where you can add panels and custom content. Below are the panels I have.

Twitch Panels

About Me panel

This is where the bio goes. I like to use the first person singular on Twitch because it’s a direct conversation with the audience; it makes more sense to say “hi, welcome to my channel” than “Emad is a developer…”.

After I had this panel for a while, Twitch decided to display the About Me info from your profile on the About page, which made this panel somewhat redundant. However, the About Me in the profile is limited to 300 characters, while in this panel you can write more, hence I renamed mine to “More About Me”!

Stream Schedule panel

This is an extension from Streamlabs that shows visitors when your next stream is. Twitch has introduced their own Schedule page in the channel, so this one might be redundant.

I am not sure if this is really benefiting the audience though. At some point I was thinking of creating my own Google calendar for the streams and letting people subscribe to it so they know when I cancel one. I might still do it, so stay tuned, and let me know if you would like that too.

Twitter feed panel

There is nothing like Twitter to tell people who you really are (not really, it’s only 140 characters of text!). This extension provides a list of your most recent tweets.

Chat Bot

Up until writing this post, I didn’t have a chat bot. But in the last stream one of the audience sent “!theme”, and I didn’t have a bot to answer, so I stopped whatever I was doing and started showing my theme. If I had had a chat bot set up, this wouldn’t have happened.

The next step after posting this is to set up a new bot. There are so many bots out there; this video shows several of them. I am already leaning towards Nightbot, but I also know that @csharpfritz has been working on one, so let’s see how it goes. It will probably be another post, so stay tuned, or ping me if I don’t write about it ;).

Summary

As you can see, this is a space of continuous change; you will find yourself changing setups and tweaking settings here and there until you reach the final setup, which will soon change yet again :D.

I hope this was beneficial, let me know if you need more information.

My Twitch Streaming Setup – Part 1 Hardware

There has been a lot of interest lately in home studio setups for streaming and recording videos. In this post I will explain my Twitch streaming setup and my experience thus far. It’s been less than a year since I started my journey of streaming on Twitch, and I am still learning and trying things out, so take these posts within this context.

While writing this post, I realised that it’s going to be a long one, so I will break it down into three posts:

1. Hardware (this article)
2. Software
3. Humanware

Laptop

At the beginning of my journey, I streamed from a Surface Pro 4 (Intel Core i7-6650U, 2.2 to 3.4 GHz, and 16 GB RAM). This worked fine when I streamed while working on pure Azure tasks that didn’t involve any local CPU consumption. But when I started doing local development and compiling code, my frames started dropping and my audience started complaining about the quality of my stream.

So when it was time for my Toolkit Allowance renewal (thanks Telstra Purple!), I decided to bump up to the best machine I could afford. I read many confusing opinions on the internet about the role of the GPU in a stream, and I couldn’t decide whether to get a machine with a good GPU or a good CPU. Since I don’t buy a machine every day, and I had some room to bump the budget beyond the allowance, I decided to get both, GPU AND CPU :).

Now I stream from a MacBook Pro (32 GB RAM, 2.3 GHz 8-core Intel Core i9, Radeon Pro Vega 20 4 GB).
The strongest voice I heard on the internet was that you want to concentrate on the CPU, but I will leave this homework to you. Needless to say, I don’t have a problem compiling, streaming, and recording at the same time now.

Microphone

microphone

Long before I got into streaming I started a podcast, and for some time I was looking for the right microphone. I needed a microphone that had good audio quality AND could record standalone in case I was on the go, and I found both in the Zoom H2N. I call this the awesome microphone, but I also believe it is overkill for most people who would like to start streaming or producing professional video content.

Microphone Mount

I got an unbranded microphone mount from eBay. You can see from the picture that I am hanging it from the bookshelf next to me, not from my desk; my desk is pretty thick and the base of the mount won’t fit on it. But as a nice side effect of hanging it on the bookshelf, the noise coming from the keyboard is barely present.

I set the microphone’s gain to the highest; I stream at night when the kids are asleep, and I am in a relatively quiet suburb. I position the microphone as close as possible to my face, just outside the camera frame. I am not 100% sure that the audience doesn’t get any pffff sounds due to the high gain, but so far no one has complained :).

Camera

camera

I have the Logitech C920; it’s very common amongst streamers, and for a very good reason: it’s well balanced between price and features. I love the angle it takes, the quality of the picture, and the auto-focus. Having said that, the only way I use it is to record my face; I don’t do close-up reviews of products, and I don’t need to move it off the top of my screen.

Keyboard

keyboard

In late 2018 I bought one of the early models of the Vortex Race 3 with the Silver switch. This is absolutely not necessary for a successful stream, but mechanical keyboards just feel luxurious :D.

I kinda regret the Silver switch, as I tend to make too many mistakes while touch typing. If I could go back, I would get a Red switch instead.

USB Hub(s)

I would have loved to get a proper docking station, but the decent ones are expensive AND they don’t support my old VGA monitors, so I went with the hubs option instead.

Five Ports UGREEN USB-C hub

UGreen USB-C hub

It’s funny that the hub is so old now that I couldn’t find it on their website 😀 to link it in this post; here is a picture of it instead.

This hub takes:

  • One of the monitors
  • An Ethernet cable (a must for streaming, in my opinion)
  • The microphone
  • The mouse
  • The camera

Generic USB-C VGA adapter

vga-adapter

This comes with extra USB-A ports to take my keyboard and the other monitor.

Key-chain USB-A to USB-C adapter

keychain-adapter

I use this adapter to connect my USB Logitech headset. I don’t use this headset’s microphone, only the headphones. Having said that, I don’t really play any music or crazy sounds during the stream, so I don’t have the headset on my head most of the time.

Studio Setup

Studio Left

Studio Right

I was lucky enough to find a relatively cheap studio setup on eBay. In preparation for this post I searched for the item on eBay and it seems the price has gone up :).

The bundle had:

  • Green, White, and Black backdrops. The material is very poor; I am not sure what it is called, but I tried to iron it once and it almost melted. It works 99% of the time, but I noticed recently that the wrinkles confuse my chroma key, and I don’t get a perfect removal. The good thing is that it’s not too noticeable, and most of the time during my stream the focus is on the code scene.
  • A big stand to hold the green backdrop. It consists of two extensible mounts and a 4-piece rod that sits across them. You then put the green backdrop over it and tighten it with clamps that came with the bundle (a little bit of a hassle really).
  • Two mounts to hold the light bulbs
  • Two 135W 5500K light bulbs
  • Two white umbrellas
  • And some accessories I don’t use (two black umbrellas and reflectors)

If I could go back, I would change the tools a little:

  • I would get the new shiny LED lighting that can be mounted to the desk behind the screens. There are many lighting setups like this, but I think the newest one out there is the Elgato Key Light. The lighting itself is not a problem, but setting up the lighting every time is just too tedious.
  • I would get an easy-setup green screen; Elgato also has a collapsible green screen that can be easily set up and taken down. For the same reason: setting my current one up and tearing it down just takes an uncomfortable amount of time, in addition to the wrinkles problem above.

Surface Pro 4 (Not necessary)

Sometimes during my stream, I’d like to explain something on a whiteboard, and using the mouse for that isn’t really natural. So I thought I could use my old Surface Pro 4. At the beginning I tried to use NDI to stream from the two laptops, but it just wouldn’t work.

So instead I used the Microsoft Whiteboard app: I use my SP4 to draw on the whiteboard, and then connect to the same whiteboard from my Mac. There can be a small delay between drawing and it appearing on the screen, but it wasn’t that bad. However, this setup is a little tedious and I am thinking of alternatives.

Summary

So this is my Twitch streaming setup. It’s worth mentioning that I didn’t get all of this at once; I accumulated it over time. I had the microphone first, then the camera, then the green screen and lighting, etc., and this was over many months.

You can also slice the budget even further if you choose a lower-end microphone and a normal keyboard.

I hope this was beneficial. If you have any questions please let me know in the comments, or ping me on Twitter at @emadashi; I would love to hear from you. Stay tuned for the two coming posts: Software and Humanware.


How to Fix Minikube Invalid Profile

TL;DR: Recent Minikube versions might not be able to read old profiles. In this post we will see how to fix an invalid Minikube profile, or at least how I did it in my case.

minikube profile list, invalid profile

Last Saturday, I had the privilege of speaking at GIB Melbourne online, where I presented on the self-hosted Azure API Management gateway. In the presentation, I needed to demonstrate using Minikube, and I had spent a couple of days preparing my cluster and making sure everything was good and ready.

One day before the presentation, Minikube suggested that I upgrade to the latest version, and I thought: “what is the worst thing that can happen?”. But then the responsible part of my brain begged me not to fall into this trap, and I stopped. Thank god I did!

After the presentation I decided to upgrade, so I upgraded to version 1.8.1 (I can’t remember which version I had before), but then none of my clusters worked!

When I tried to list them using the command “minikube profile list”, I found them listed under the invalid profiles.

Oh, this is not good! Was this update a breaking change that renders my clusters unusable? Or is it that the new Minikube version doesn’t understand the old profile configuration? And is the only way to solve the problem to delete my clusters?! I am not happy.

Can I fix the configs?

Before worrying about breaking changes, let me check what a valid profile looks like in the new version. So I created a new cluster and compared the two profiles. You can find a cluster’s profile in .minikube/profiles/[ProfileName]/config.json.

The following are the differences that I noticed (a sketch of the resulting shape follows the list):

comparison between the old and new minikube profile
  • There is no “MachineConfig” node in the configuration anymore; most of its properties moved one level higher in the JSON path.
  • The “VMDriver” changed to “Driver”.
  • The “ContainerRuntime” property is removed.
  • About four new properties were introduced:
    • HyperUseExternalSwitch
    • HypervExternalAdapter
    • HostOnlyNicType
    • NatNicType
  • The “Nodes” collection is added, where each JSON node represents a Kubernetes cluster node. Each node has the following properties:
    • Name
    • IP
    • Port
    • KubernetesVersion
    • ControlPlane
    • Worker
  • In the KubernetesConfig, the node properties moved to the newly created “Nodes” collection mentioned above:
    • “NodeIP” moved to “IP”
    • “NodePort” moved to “Port”
    • NodeName moved to Name
    • A new property ClusterName is added
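
Putting the differences together, a new-format profile looks roughly like this (a trimmed sketch based on the comparison above; the values, driver, and node name here are illustrative, not from my actual cluster):

{
  "Name": "my-cluster",
  "Driver": "hyperkit",
  "Nodes": [
    {
      "Name": "m01",
      "IP": "192.168.64.2",
      "Port": 8443,
      "KubernetesVersion": "v1.17.3",
      "ControlPlane": true,
      "Worker": true
    }
  ],
  "KubernetesConfig": {
    "KubernetesVersion": "v1.17.3",
    "ClusterName": "my-cluster"
  }
}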

The Solution

So what I did is change the old profile format to match the new format, and set the new and changed properties to the values that made the most sense, just like above. All was straightforward except for the node IP address: it was missing!

Digging a little deeper, I found the IP address value (and other properties) in the machine configuration “.minikube/machines/[clustername]/config.json”. I copied these values from there, ran my cluster, and it was resurrected from the dead!

I would have loved it if Minikube itself took care of fixing the configs rather than suggesting that the profiles be deleted. Or maybe that can be a pull request :).

I hope this helps.

Posh-git on Mac using Oh My Zsh Themes

This post explains how to get the posh-git prompt style in an Oh My Zsh theme on Mac.

After 4 years of using Windows, I am coming back to using a Mac, and there are so many things from Windows I am missing already. One of these things is posh-git; I loved how in one glance at your prompt you know the status of your git repo: how many files changed, how many were added, how many were deleted, how many were indexed… I just love it!

Once I moved to the Mac, I changed my shell to zsh using Oh My Zsh, due to the rich experience it brings to the terminal. I was delighted to see all these themes and plugins, and then started looking for a theme that provided the same information the posh-git prompt provides. To my surprise, there was none! So I started my quest to see how I could change zsh, the theme, or the plugin to get such a prompt.

A posh-git prompt that shows the number of files indexed and changed.

Being lazy, I wanted to change an existing theme I like with the least amount of investment. I looked in the documentation to see how I could do that, and found the customisation wiki page:

Should I override the theme?

Overriding the theme seemed to be the perfect solution; however, there were a couple of drawbacks:

  • When you override a theme, you override the theme, period! This means that if the author changes something after you have overridden it, you will not get those new changes.
  • It was a little bit too much for me to grasp! When I looked at the avit theme as an example, I had questions like: what are PROMPT and PROMPT2? What are all these special characters? Where is the reference/documentation for all of these? Are they theme-specific, or are they part of the zsh theme reference?

Remember, I wanted to put in the least amount of effort, and I surely didn’t want to learn the whole thing! But while looking into the avit theme, one thing caught my attention: there was a clear reference to what seemed to be a function, git_prompt_info. And I thought: this should be it, if I could only find where this function is and how to override it.

To my luck, it was mentioned as an example in the customisation wiki page!

Override the internals it is!

Ok great, now I know that I can customise git_prompt_info; all I need is to mimic whatever posh-git does in that function!

So I hit google duckduckgo again in the hope that someone had already done this, and oh my! I found that there is already a port of it in bash. That’s great, but now what should I do? Replace the call to git_prompt_info in the theme with a call to __posh_git_ps1? Or should I call it from git_prompt_info? Since git_prompt_info is an internal lib function, it is probably used in many themes, thus it makes sense to just call __posh_git_ps1 from within it. And to my good surprise, there is a GitHub issue in the posh-git-bash repo that discusses integrating with zsh; it’s even referenced in the main README.md file of the repo.

Initially I mistakenly called the __posh_git_ps1 function, but I soon realised that I needed to print (echo) the git info just like git_prompt_info does, rather than change any variables; for that I should use __posh_git_echo.

And thus I ended up with a file called emad-git-prompt.zsh under the path ~/.oh-my-zsh/custom, with the content of posh-git-bash here, and at the end of the file I wrote the following code:

git_prompt_info () {
  __posh_git_echo
}

I hope this helps you 🙂

Learning a New Programming Language (Go language as an Example)

Summary

This post explains why and how I learned the Go language. Hopefully this will help you to learn it quickly, or will inspire you on how to learn new languages.

The Reason to Learn a New Language

There can be many reasons why someone would want to learn a new language; the main ones to me are: 1) to solve a current business problem, 2) to learn concepts to adapt to current tools, and 3) for fun and passion. Of course, you can have a mix of these reasons pushing you to learn a new language, or maybe just one of them that is strong enough.

For a very long time in my career, C# was my main programming language. I used JavaScript a lot too, but it always took a back seat until TypeScript came about and SPAs became the de facto front-end development model. So for 16 years, it has been two and a half languages for me, and I have never felt the need to learn another language (Java in university doesn’t count).

Why not Haskell or F#?

When functional programming became a thing again, I tried to find the right reason to learn F# (or Haskell), but with the explosion of technical information in our industry, time became even more scarce (I have three kids under 5!) and I really needed a stronger reason to spend my time learning a new language. Unfortunately, even with @DanielChambers’ continuous efforts at converting me :P, I didn’t jump on the wagon.

It’s funny that the reason I couldn’t put in the effort was exactly the reason functional programming itself is compelling: the paradigm shift. The paradigm shift was so big that the organisations I spend most of my time helping couldn’t afford to embrace it; 20+ years of OOP meant a lot of investment in education, solutions and patterns, frameworks, and staffing that made it hard to embrace such a change.

In my experience with these organisations, there might have been situations where functional languages could have solved a problem better than an OOP one, but the return on investment would have been small in light of these organisations’ legacy.
Of course, I am not suggesting that organisations should not invest in learning and adopting new technologies; that would be the path to failure! I am just describing the situation of most of the organisations I worked with.

This ruled out the business-need reason for me, and I was left with “learning concepts to adapt to current tools”, since passion alone was just not enough :P. Luckily, I am surrounded by friends who are passionate about functional programming, and I managed to learn from them enough about its benefits and how to bring them to my OOP world. Conversations with these friends and colleagues, like Daniel Chambers and Thomas Koster, and attending lectures by professionals like Joe Bahari, have helped me a lot in adopting functional concepts in my C#.

I Found The Reasons in Go

Gopher

So I stayed on two and a half languages, until last year when I got the chance to work on a project in which we used Kubernetes. Once you step into the Kubernetes world you realise that Go is the hero language; Kubernetes is written in Go, Helm is written in Go, and the templates Helm uses are based on the Go template engine. Although you can use Kubernetes without learning the Go language, once you want to get a little deeper it feels that learning Go would be an advantage.

In addition to that, with Cloud being my main interest, I have been seeing Go used more and more as the language of choice for many cloud-native projects, products, and services.

Around the same time, many of my colleagues and Twitter friends were porting their blogs from database-driven engines like WordPress to static website generators like Jekyll. I have two websites that could benefit from that: 1) my blog emadashi.com, and 2) the dotnetarabi.com podcast, which I built on ASP.NET and Rob Conery’s SubSonic ORM. My friend Yaser Mehraban kept teasing me and applying his peer pressure until I surrendered, and I finally started looking into moving my blog and my podcast to a static website generator.

My choice was Hugo; to me, it seemed the most mature static site generator with the least amount of churn and the smallest learning curve. And guess what? Hugo is written in Go! And its templating engine is based on Go’s. The same as with Kubernetes: you don’t need to learn Go to use Hugo, but it’s just another compelling reason to be familiar with the language.

So by now, it feels like I am surrounded by problems that are being solved with Go, and it’s ever more evident that there is a good possibility I will work in Go in the future, even professionally.

All this, in addition to the low barrier to entry due to familiarity with C#, encouraged me to jump into the water.

Where did I Start?

There are so many ways a person can start learning a language; I wanted to learn the language fast, and to learn just enough to get me going. For this reason, I didn’t pick up a book that would take me a while to get through, even though a book is probably the most profound way.

Instead of picking up a book, I went to https://golang.org and checked what the website had to offer; most modern projects and languages have documentation that includes tutorials and a Getting Started guide. If these guides are well crafted they are a great learning boost, and to my luck Go had great content.

Set-up

The first thing I wanted to do was set up the environment and run the most basic example (the hello world of Go); for that I followed the Getting Started guide. Setting up the environment as a basic step of learning a language is very important; it gives you an understanding of the language’s requirements and sets some expectations of how friendly the experience will be; it breaks the ice. It also paves the way for the Hands-On step; I will explain this step later in this article.
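
For reference, the canonical first program you end up running looks like this (you can run it with “go run hello.go”):

package main

import "fmt"

func main() {
    // Print the traditional greeting to standard output.
    fmt.Println("Hello, World!")
}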

Foundation

Now that my environment was set up and I had run my hello world example, I needed to understand what was really going on: how the code compiles, how it runs, how it is packaged, how it is hosted; I needed the foundational concepts to establish firm ground to base my learning on. Learning the syntax and the various Go features will come along, and it will take time, but you can’t postpone the foundations. For this, I followed the “How to Write Go Code” guide. The article’s title might not sound too foundational, but the content lays out the concepts.

Cruise as you need

If this is NOT your first programming language, then you are already familiar with the concepts of control flow: functions, loops, if clauses, etc. This gives you a very good advantage to sweep through these swiftly; it’s unlikely that they are too different from other languages. A fast run-through should be enough to capture anything that stands out.

For this I used the Tour; there are two great things about the Tour: 1) it has a simple, navigable structure, and 2) it is associated with an online playground where you can experiment and confirm your understanding on the spot. There is a wide range of topics covered in the Tour, some of which I went through fast, and some I took my time to comprehend; e.g. slices can be a little confusing compared to arrays in C#.
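
To illustrate the kind of surprise I mean, here is a minimal example; coming from C#, the part that tripped me up is that a Go array is a value while a slice is a view over one:

package main

import "fmt"

func main() {
    // Arrays have a fixed size and are values: assignment copies the data.
    arr := [3]int{1, 2, 3}

    // A slice is a view over an underlying array; assigning it copies
    // the header (pointer, length, capacity), not the elements.
    s1 := arr[:2] // [1 2], capacity 3
    s2 := s1      // s2 shares the same backing array
    s2[0] = 99    // visible through arr and s1 too

    fmt.Println(arr, s1, s2) // [99 2 3] [99 2] [99 2]

    // append reuses the backing array while there is spare capacity,
    // so this overwrites arr[2] rather than allocating a new array.
    s1 = append(s1, 42)
    fmt.Println(arr) // [99 2 42]
}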

Note: Everyone’s experience is different, so it will not make sense to list the topics I went through swiftly and the ones I spent time on; use your own experience to judge that for yourself.

As for the advanced topics, I left them for a little while until I had a better grasp of the basics of the language; overwhelming yourself with advanced topics at this stage might have a counter-effect on your learning.

Hands-On

After understanding the basics from How to Write Go Code, and sweeping through the Tour, it was time to get my hands on the language; this is the only way you can really understand and learn a language.

I needed a problem to solve so I would have a driving purpose. The problem I chose was to import the existing records of DotNetArabi (guests and episodes) from the database and create the corresponding Markdown files for the Hugo website; this was my first program.

It’s important to understand here that I wasn’t 100% on top of things yet (nor am I now :P), but it was the practical experience that I relied on to grasp the concepts and gain experience. If you leave the practical side for too long, you will find yourself forgetting the basics, or learning advanced topics that you will rarely use. An iterative approach works very well here.

So I gradually built the application; each time I got stuck I’d either refer back to the Tour, or google the problem if it wasn’t covered there (e.g. connecting to a database). In each of these stuck-and-solved situations, I took a moment to make sure I understood the solution and the technique behind it. Copy and paste is absolutely fine as long as you pause and comprehend.

Advanced Topics

Ok, at this stage I felt like I knew the basics, and I was comfortable writing a program without big issues. But at this stage, writing a program in Go would give me very little advantage (if any) over writing it in another language; I wasn’t getting the best out of the language. It’s the advanced features that make the difference, things like goroutines and channels, by which we achieve concurrency with the least amount of maintenance overhead.
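
As a taste of what that looks like, here is a minimal worker-pool sketch (an illustrative example, not code from my actual importer):

package main

import "fmt"

// worker squares each job it receives and sends the result back.
func worker(jobs <-chan int, results chan<- int) {
    for j := range jobs {
        results <- j * j
    }
}

func main() {
    jobs := make(chan int, 5)
    results := make(chan int, 5)

    // Three goroutines consume from the same jobs channel concurrently.
    for i := 0; i < 3; i++ {
        go worker(jobs, results)
    }

    // Send the work, then close the channel so the workers' loops end.
    for j := 1; j <= 5; j++ {
        jobs <- j
    }
    close(jobs)

    // Collect exactly as many results as the jobs we sent.
    for i := 0; i < 5; i++ {
        fmt.Println(<-results)
    }
}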

Don’t be afraid of the advanced topics; avoiding them because they might be complicated will jeopardise the value we get from learning a language in the first place!

So for this, I continued the Tour mentioned above into the advanced topics. The playground was of tremendous value, as you will need to change things around to confirm your understanding. Also, the Tour has some exercises that will poke your thoughts; I highly advise trying these out! They will not just push you to comprehend the concepts, they will also expand your horizons regarding the use cases where you might need these advanced features.

It is great fun, and valuable, to go back to your pet project and try to implement some of these advanced concepts, and this is what I did: I went back to my application and utilised goroutines to extract the data into the Markdown files.

Unit Testing

Leaving unit tests to the end wasn’t undermining their value; rather, I wanted to focus on the language itself first, as test frameworks can push the complexity and the learning curve up considerably. My experience from JavaScript stings to this day :P.

The Best of Go

Finally, the Go website has a section called “Effective Go”. This section is not really reference documentation, but it can be very valuable for writing Go code the way the language intends it to be written. It provides further context and well-rounded style guidance for writing the language in its best form.

Here too I advise picking and choosing the topics; reading the whole thing might be counter-productive.

Close the Loop, Complete the Picture

By now you’d think you were finished, but this is just the beginning; now is the time to tie things together by revising the language’s main characteristics, philosophy, and greatest advantages.

If we look specifically at Go, as our example, this might be things like the simplicity of Go, where there are no classes, no inheritance, and no generics. Or things like concurrency and how Go deals with state in asynchronous code execution. At this stage, it is valuable to check out the videos, like Sameer Ajmani’s talk, and the literature out there that discusses “Why Go”.

I also found the FAQ on golang.org a valuable resource for some of the justifications and explanations. You should not read it like an article, though; pick and choose the topics of interest.

But isn’t this backwards? Shouldn’t I learn about these things at the beginning? True, you can learn them at the beginning, but you will not value the claims until you put your hands on the problems in practice; until then they will be merely claims in the air. So even if you start with these, you should also revise them at the end and make sure you close the loop.

Conclusion

In my journey to learn Go, I did the following:
• I had a good reason
• I established the core concepts
• I installed the tools and ran the “hello world” program
• I scanned through the control flow
• I put my hands on the code and wrote the first program
• I read the advanced topics, and used the playground to confirm my understanding
• I watched more videos on why to use Go and its advantages

It’s important to say here that choosing a language to adopt in an organisation involves more than just learning it. If you are in a position to influence such a decision, just be mindful of that.

I hope this helps you out. Enjoy coding :).

RBAC in Azure Kubernetes Service AKS on Twitch!

TL;DR: I will be streaming on Twitch next Monday (25th of March) at 8:30 Melbourne time (GMT+11), configuring Azure Kubernetes Service (AKS) to use RBAC.

Twitch logo

For a long while, I’ve been thinking about streaming live development on Twitch or YouTube. Having spent some time behind the microphone making the DotNetArabi podcast, I can say there is a satisfying feeling in producing content in a media format through which you can connect with the audience.

Why not just offline video?

I could just record an offline video and host it on YouTube, and it’s definitely a valuable medium. The problem with educational videos specifically is that they are a one-way communication channel, and without the entertainment factor, unlike movies, these videos can be daunting, imprisoning, and hard to follow.

The magic of live streaming

But with live streaming magic happens; it adds additional dimensions that make it more appealing:

  1. It’s LIVE! It’s happening NOW, and this means a couple of things: it implicitly has the anticipation factor; things are still happening and they might take interesting turns, just like live sports. In addition, by sharing the time span during which the event is happening, the audience gets the feeling of involvement, of “I was there when it happened”, even if they didn’t directly interact with the broadcaster.
  2. It’s real and revealing: when I was doing my homework preparing for this, I talked to my colleague Thomas Koster, and when I asked him what could interest him in live streaming, his answer was:
    …it’s probably more the real time nature of it that appeals – to see somebody’s thought processes in action, as long as the broadcaster doesn’t waste too much time going around in circles.
    For example, watching somebody figure out a puzzle solution in the game The Witness in real time is much more interesting and valuable than watching a rehearsed, prepared performance of only the final solution.

    This is the ultimate stage for a developer broadcaster; it requires a lot of bravery and experience. I’d love to be able to do this soon, but it’s really the 3rd reason below that drew me to streaming.

  3. It’s two-way communication: the interactive communication between the broadcaster and the audience brings the video to life. It provides a timely opportunity to get the best out of this communication, whether it is the audience correcting the broadcaster, or the broadcaster being available for immediate inquiries.

It is specifically for this last reason that I became interested in live streaming; I want this relationship with my audience: a collaborative experience where value comes from everyone and flows in all directions.

So, I am doing my first stream!

I have been following Jeff Fritz @csharpfritz and Suz Hinton @noopkat and have been greatly inspired by their amazing work! Also, @geoffreyhuntley has started his journey and gave me the last nudge to jump into this space. I’ve learned a lot from Suz’s post “Lessons from my first year of live coding on Twitch“, and recently Jeff’s “Live Streaming Setup – 2019 Edition” (don’t let it scare you, you don’t have to do it all!).

My next stream will be about Role-Based Access Control (RBAC) in Azure Kubernetes Service (AKS); I will walk you through RBAC, the OAuth2 device flow, and how this works within AKS, with hands-on live deployments and configuration.

What is my goal, and what is not?

What I am trying to achieve here is two-way communication throughout the session with my audience, that’s it.

Am I going to do this constantly now?

Actually, I don’t know! To me this is an experiment; I might keep doing it, or this might be my first AND LAST stream. Let’s see what the future brings. 🙂

Fix “Mixed Content” When Using Cloudflare SSL And IIS Rewrites

In this post, I explain how I fixed the “mixed content” security issue when using Cloudflare Flexible SSL, and IIS Rewrite.

I Run Two Websites Under One Account Using IIS Rewrites

I have two websites hosted under one account with my hosting provider (I know!): https://emadashi.com and https://dotnetarabi.com. The way I do it is by using IIS Rewrite rules in my web.config: for any request targeting one of these domains, I “rewrite” the URL so it points to the sub-directory that serves the request. This changes where the file is served from, but does not change the request URL the user sees.

However, if by any chance a request comes to the server targeting the sub-directory itself, that page will still be served as-is, which is not desirable, as I don’t want to expose the internals of my websites; it’s ugly and bad for my websites’ URL discovery. In this case, I first want to “redirect” the user to the domain without the sub-directory, and then run the rewrite rule mentioned above, which is what I did.

In pseudo terms, when a request comes in, the execution of the rules looks like this (a sketch of such a rule pair follows the list):

  1. Rule1: Does the URL include a sub-directory? If so then Redirect to the same URL without the sub-directory.
  2. Rule2: The URL does not include the sub-directory, so Rewrite (not Redirect) to the sub-directory.
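
For illustration, a trimmed web.config sketch of such a rule pair could look like the following (the rule names and patterns are illustrative, not my exact rules, and conditions you would add in practice, such as loop prevention, are omitted; note the protocol in the redirect, which is what the rest of this post revolves around):

<rewrite>
  <rules>
    <!-- Rule 1: a URL that exposes the sub-directory gets Redirected to the clean URL -->
    <rule name="RedirectAwayFromSubdir" stopProcessing="true">
      <match url="^dotnetarabi/(.*)" />
      <action type="Redirect" url="http://dotnetarabi.com/{R:1}" appendQueryString="true" />
    </rule>
    <!-- Rule 2: a clean URL gets Rewritten (not Redirected) into the sub-directory that serves it -->
    <rule name="RewriteToSubdir" stopProcessing="true">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^(www\.)?dotnetarabi\.com$" />
      </conditions>
      <action type="Rewrite" url="dotnetarabi/{R:1}" />
    </rule>
  </rules>
</rewrite>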

I Want to Serve My Websites Over HTTPS, But…

Now, when I wanted to secure my websites and start serving requests over HTTPS, thanks to Troy Hunt’s continuous nagging :P, I couldn’t just use normal certs with my hosting due to the way I am running it. So again, based on Troy Hunt’s awareness efforts, I used Cloudflare’s free Flexible SSL service.

This went fine until I discovered that the engine of dotnetarabi generated the guests’ image URLs including the sub-directory. When I open dotnetarabi over HTTPS, the first request to these URLs is HTTPS, but of course contains the sub-directory; the second request though (the redirect to the URL without the sub-directory) always comes back as HTTP! This caused the well-known “insecure; mixed content” problem.

Simply put, the reason is that:

  1. With Flexible SSL, Cloudflare ALWAYS communicates with your server via HTTP; your server doesn’t have certs, which is why you need Flexible SSL in the first place!
  2. Cloudflare Flexible SSL doesn’t force HTTPS if you haven’t explicitly asked it to (via the Always Use HTTPS option). So if a request comes in via HTTP, it will be passed through as HTTP.

So in the case of my redirects above, what happens is the following:

  1. The request comes to Cloudflare via HTTPS, with the URL including the sub-directory.
  2. The request is forwarded to my server via HTTP (NOT HTTPS!) to the sub-directory
  3. My server innocently redirects the request to the URL without the sub-directory, but using the same protocol the current request is using, which is HTTP, because it always will be!
  4. The user receives the redirect to the new URL, but with the HTTP protocol this time, and Cloudflare just passes it through because it does not force HTTPS.

The Solution

The trick is that while Cloudflare does not use HTTPS when it forwards the request to your server, it does add the header X-FORWARDED-PROTO=https to the requests to your server if the original request was using HTTPS.

So, all I needed to do was check this header in my redirects: if it says https, redirect to HTTPS; otherwise redirect to HTTP.

The Action part of my rule, along with the rewrite map it uses:

<action type="Redirect" url="{MapSSL:{HTTP_X_FORWARDED_PROTO}}dotnetarabi.com/{C:1}" appendQueryString="true" logRewrittenUrl="false" />
<rewriteMaps>
  <rewriteMap name="MapSSL" defaultValue="https://">
    <add key="https" value="https://" />
    <add key="http" value="http://" />
  </rewriteMap>
</rewriteMaps>


HTTP Binding in PowerShell Azure Functions

In a small project, I was trying to utilize an existing PowerShell script I had and host it in Azure Functions. I needed to understand how HTTP binding works with PowerShell Azure Functions, as I didn’t want to rewrite my script in C# just because PowerShell Azure Functions had “(Preview)” appended to its name.

I wanted the Function to return a plain text response to an HTTP trigger based on a query parameter (this is how Dropbox verifies Webhook URLs). So, naively, I followed the basic template as an example:

Write-Output "PowerShell HTTP function invoked"

if ($req_query_name) 
{
	$message = "$req_query_name"
}
else
{
	$message = "wrong!"
}

[io.file]::WriteAllText($res, $message)

The first question I had was “how is the querystring parsed?”. I assumed that I should replace “req_query_name” with the querystring key in the request; should I replace the whole thing to become $myQueryParam? This is when I decided to look at the source code rather than the documentation.

Note: I try to link back to the source code wherever I can; the problem is that the links do not include the commit ID, so next to each link I put the commit ID at which the file was in the described state.

HTTP Binding

There are different phases that take place during a Function’s execution. In this post I will skip the details of how the binding is loaded, and concentrate only on how the HTTP binding operates within a PowerShell Function.

Input

When the Azure Functions runtime receives an HTTP message for a PowerShell script that has an HTTP binding, it parses the message according to the following:

  • The body of the HTTP request will be saved to a temp file, and the path of the temp file will be assigned to an environment variable that matches the “Name” property of the input binding configuration. If we take the following JSON as an example of our “function.json” configuration, then the name of the variable will be “req“:
    {
       "bindings": [
       {
         "name": "req",
         "type": "httpTrigger",
         "direction": "in",
         "authLevel": "function"
        },
        {
          "name": "res",
          "type": "http",
          "direction": "out"
        }
      ],
      "disabled": false
    }
    

    (This happens here at dcc9e1d)

  • The original URL will be saved in environment variable “REQ_ORIGINAL_URL“.
  • The HTTP request method will be saved in environment variable “REQ_METHOD“.
  • For each HTTP header “key”, a corresponding environment variable “REQ_HEADERS_key” will be created.
  • The full querystring will be saved in environment variable “REQ_QUERY“, it will also be further parsed into individual variables; for each query string “key”, a corresponding variable “REQ_QUERY_key” will be created.

All of this happens before the execution of the Function, so once the Function is invoked these variables are already available for consumption. (This happens here at dcc9e1d.)

To read the body of the request, you just read it as you would read any file in PowerShell, and then you parse it according to the content; so if the body of the request is JSON, you read the file and parse it to JSON like the following:

$mycontent = Get-Content $req | ConvertFrom-Json
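
You can then pull values out of the parsed object as usual (assuming, for illustration, that the request body has a “name” property):

$name = $mycontent.name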

Note: If the Function is executing because of a triggered binding (such as HTTP), the rest of the input bindings are skipped. (Check the code here at commit dcc9e1d.)

Output

Similar to the request, your script should write the response to a file, which in turn will be read by the Azure Functions runtime and passed to the HTTP output binding to send on your behalf. The runtime will also assign the path of this file to an environment variable that matches the Name property you define in the output binding in function.json.

So for the example function.json above, you will write the content of your response to the file whose path is stored in “res”:

[io.file]::WriteAllText($res, $message)

This happens here at commit dcc9e1d.

Default Behaviour

Now, if the content you write to the file is a string that cannot be parsed as JSON, then it will be considered the body of the HttpMessage, the response will have the default HTTP content-type “application/json”, and it will be run through the default MediaTypeFormatter. Take the following as an example:

Function:

$message = "This is a text"
[System.IO.File]::WriteAllText($res, $message)

Result:

Content-Type: application/json

"this is a text"

Notice that the text written to the file in the script is without quotes, but the result in the response body is in double quotes; this is because the default content-type of the response is “application/json”, and the HTTP binding formats it accordingly and wraps it in double quotes.

More Control

If we want more control over the response, then we have to write a JSON object to the file; this JSON object holds all the information about how the response should look: the headers, the body, and the response status.

The JSON object can contain the properties “body“, “headers“, “isRaw“ (more about it below), and “statusCode“ (int), whichever you want to change. For example, if I want the content of the response to be simple text with the text/plain content-type, then the script should write the following:

$message = "{ `"headers`":{`"content-type`":`"text/plain`"}, `"body`":`"$name`"}"
[System.IO.File]::WriteAllText($res,$message)

There are several points that need to be brought up:

  1. If the “body” property exists, then only the value of the “body” property will be in the HttpMessage body; otherwise the whole content of the JSON object will be in the HttpMessage body.
  2. Up until the time of writing this post, Azure PowerShell Functions run under PowerShell 4.0. This means that if you use the Out-File command to write to the file, it will always append a newline (\r\n) at the end of the string, even if you supply the -NoNewLine parameter! Use the WriteAllText method instead.

The parsing can be found here at commit 3b3e8cb.

Formatters

Great, so far we have managed to change the body, the headers (including the content-type), and the status of the response. But this is still not enough; depending on the content-type header, the Azure Functions runtime will find the right media formatter for the content and format the response body accordingly.

There are several types of media formatters in the System.Net.Http.Formatting library: JsonMediaTypeFormatter, FormUrlEncodedMediaTypeFormatter, XmlMediaTypeFormatter, and others. The issue with the formatters is that they might add the UTF-8 Byte Order Mark (BOM) at the beginning of the content. If the recipient is not ready for this, it might cause a problem.

Dropbox, for example, provides a way to watch changes to a file through their API by registering a webhook, and the way Dropbox verifies the webhook is by making a request to the endpoint with a specific querystring, expecting the webhook to respond by echoing the querystring back. When I created my Function I didn’t change anything, thus the runtime used the default formatter and appended the UTF-8 BOM characters (0xEF, 0xBB, 0xBF) to the beginning of the body, which of course was rejected by Dropbox.

The way to skip these formatters is by setting the “isRaw” property mentioned above to true. For example, the following script will write a plain text “emad1234” to the response:

$message = "{ `"headers`":{`"content-type`":`"text/plain`"}, `"body`":`"emad1234`" }"


Taking a screenshot in Fiddler’s HexView, the response looks like this:


BOM characters in response of PowerShell Azure Function

Have you noticed the characters I surrounded with the red box? That’s the BOM.

But once we add the “isRaw” property like this:

$message = "{ `"isRaw`": true, `"headers`":{`"content-type`":`"text/plain`"}, `"body`":`"emad1234`" }"

The result will be without the BOM:

Fiddler showing the Azure Function response without the BOM

This can be found here at commit 3b3e8cb.


Final Notes

It’s worth mentioning that the Azure Functions runtime also provides a content-negotiation feature, so you can leave it to the request to decide the format.

Another departing thought: of course, you don’t have to craft your JSON object by concatenating strings together; you can use PowerShell arrays and hashtables to do that, as the articles here and here show.
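
For example, the raw text/plain response from earlier could be built from a hashtable like this (a sketch; ConvertTo-Json takes care of the escaping for you):

# Build the response as a hashtable instead of a hand-crafted string.
$response = @{
    isRaw      = $true
    statusCode = 200
    headers    = @{ 'content-type' = 'text/plain' }
    body       = 'emad1234'
}

# ConvertTo-Json serialises the hashtable, escaping quotes for us.
[System.IO.File]::WriteAllText($res, ($response | ConvertTo-Json))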

Finally, isn’t it awesome to be able to see that in the source code!

Conclusion

PowerShell is probably the language that got the least love from the Azure Functions team, but this does not mean you should throw your scripts away; hopefully, with the tips in this post, you will find a way to use them again.

Help with DotNetArabi

DotNetArabi started eight years ago as one of the first Arabic websites to offer high-quality Arabic content, presenting many episodes with Arab technology stars of long experience and distinguished achievement. The podcast began as an individual, non-profit effort at personal expense, and for several years it performed well, consistently producing an episode every four weeks.

Over the last two years, however, releasing episodes slowed down, and the period between one episode and the next kept growing despite all attempts to increase production. The idea of moving the work from an individual effort to a collective one has been on my mind for years, but I could not find a clear, practical mechanism to rely on for turning the work from individual into collective and voluntary, one through which I could take up the offers of those loyal listeners who expressed a desire to participate. Things stayed as they were until there was no choice but to try collective volunteering, even with the simplest of tools; better to arrive late than never to arrive.

Based on this, and after consulting some friends and companions, I would like to open the door to participation in DotNetArabi, to produce episodes faster and with high quality. To make participating easier, I must explain the episode production process and list its steps, which will make it easier for a volunteer to choose what to contribute.

Production Steps

First: Finding the right guest

In this step I search for a suitable guest for the show. The guest is required to have expertise in their field, and the available ways to establish this are:
• Searching for the guest’s publications, such as a blog or high-quality articles.
• Searching for the guest’s contributions on GitHub.
• Their holding a senior technical position at their company.
• Or their being vouched for directly by a trusted person.
It must be noted here that we do not limit professionalism to those who hold such positions or achievements; there are many professionals who never had the chance to do these things, but for DotNetArabi this is the available way to verify a guest’s ability.

Whoever volunteers for this task will search for a guest and then share with me some links they found that document the guest’s accomplishments.

It is worth mentioning that this step is open to everyone, with no coordination needed.

Second: Arranging the appointment

In this step I contact the guest, tell them about DotNetArabi, and offer to record an episode with them. If the guest accepts, we proceed to arrange a time to record the episode and to lay out the episode’s key talking points.

Third: Recording the episode

In this step the episode is recorded with the guest over Skype.

As for the two previous steps, “arranging the appointment” and “recording the episode”, I think it would be difficult for anyone other than the show’s host to carry them out.

Fourth: Audio production

Recording an episode produces an MP3 audio file that needs processing, which includes the following:
• Cutting out the segments that contain mistakes, stumbles, and unwanted stretches such as “aaaah”.
• Improving the audio quality by running it through audio filters.
• Creating a new MP3 file and editing its properties, such as the title, the artwork, and others.

This step requires some technique; it does not need much knowledge, but it does need practice. It is therefore expected that I will train the volunteer in how to do the production, and that the first episodes will be reviewed closely before the task is fully handed over.

Fifth: Publishing the episode

This step includes uploading the MP3 file to the website, writing the introduction on the website, and announcing the episode on social media. This step also involves some technical details, and I will certainly help whoever takes on this task at the beginning.

With these five steps the episode is complete, and the journey starts again with another episode.

How we will coordinate

Each volunteer will list the tasks they wish to volunteer for, and several volunteers may apply for the same task. Accordingly, each episode will have a different arrangement that depends on the volunteers’ schedules and how much time they can make available. The tool I chose for coordinating these steps between volunteers is trello.com, a tool built on the idea of what is called a “Kanban board”, where each episode will have a card that moves between the steps, which will be represented as columns on the board.

Each volunteer will be able to pick up a card in the column of the step they want to work on, assign it to themselves until the work is done, then push it to the next step’s column, and so on.

“What do I gain if I volunteer?”

One might ask: “what do I gain if I volunteer?” Besides your contribution to growing others’ knowledge and enriching Arabic content on the internet, everyone who volunteers to take part in this work will be thanked; and since DotNetArabi is not a for-profit organisation, the thanks will take the form of crediting everyone who participated in producing an episode in that episode’s summary on the website.

What now?

If you would like to participate in producing DotNetArabi episodes, send a message to “emad.ashi” on GMail, and we will arrange things with you and explain whatever could not be explained in this article. And if you do not wish to participate but have any advice, comment, or criticism, please do not hesitate to send it too.

Thank you for your interest, and let’s stay in touch.