Author Archives: Emad Alashi

How to Fix Minikube Invalid Profile

TL;DR: Recent Minikube versions might not be able to read old profiles. In this post we will see how to fix an invalid Minikube profile, at least how I did it in my case.

minikube profile list, invalid profile

Last Saturday, I had the privilege to speak at GIB Melbourne online, where I presented on the self-hosted Azure API Management Gateway. In the presentation, I needed to demonstrate using Minikube, and I spent a couple of days preparing my cluster and making sure everything was good and ready.

One day before the presentation, Minikube suggested I upgrade to the latest version, and I thought: "what is the worst thing that can happen?", but then the responsible part of my brain begged me not to fall into this trap, and I stopped. Thank god I did!

After the presentation I decided to upgrade, so I upgraded to version 1.8.1 (I can't remember which version I had before), but then none of my clusters worked!

When I tried to list them using the command "minikube profile list", I found them listed under the invalid profiles.

Oh, this is not good! Was this update a breaking change that renders my clusters unusable? Or is it that the new Minikube version doesn't understand the old profile configuration? And is deleting my clusters the only way to solve the problem?! I am not happy.

Can I fix the configs?

Before worrying about breaking changes, I wanted to check what a valid profile looks like in the new version, so I created a new cluster and compared the two profiles. You can find a cluster's profile in .minikube/profiles/[ProfileName]/config.json.

The following are the differences that I noticed:

comparison between the old and new minikube profile
  • There is no "MachineConfig" node in the configuration anymore; most of its properties have moved one level higher in the JSON path.
  • The “VMDriver” changed to “Driver”.
  • The “ContainerRuntime” property is removed.
  • Four new properties are introduced:
    • HypervUseExternalSwitch
    • HypervExternalAdapter
    • HostOnlyNicType
    • NatNicType
  • The “Nodes” collection is added, where each JSON node represents a Kubernetes cluster node. Each node has the following properties:
    • Name
    • IP
    • Port
    • KubernetesVersion
    • ControlPlane
    • Worker
  • In the KubernetesConfig, the node properties are moved to the newly created "Nodes" collection mentioned above:
    • "NodeIP" moved to "IP"
    • "NodePort" moved to "Port"
    • "NodeName" moved to "Name"
    • A new property "ClusterName" is added
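Putting these differences together, a migrated profile would look roughly like the following sketch. This is a hedged illustration only: the values are made up, and you should compare against a profile freshly created by your Minikube version rather than trusting these exact fields.

```json
{
  "Name": "myprofile",
  "Driver": "hyperv",
  "HypervUseExternalSwitch": false,
  "HypervExternalAdapter": "",
  "HostOnlyNicType": "virtio",
  "NatNicType": "virtio",
  "KubernetesConfig": {
    "ClusterName": "myprofile",
    "KubernetesVersion": "v1.17.3"
  },
  "Nodes": [
    {
      "Name": "m01",
      "IP": "192.168.1.20",
      "Port": 8443,
      "KubernetesVersion": "v1.17.3",
      "ControlPlane": true,
      "Worker": true
    }
  ]
}
```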

The Solution

So what I did is change the old profile format to match the new format, and set the new and different properties to the values that made the most sense, as above. All was straightforward except for the node IP address; it was missing!

Digging a little deeper, I found the IP address value (and other properties) in the machine configuration ".minikube/machines/[clustername]/config.json". I copied these values from there, and then ran my cluster, resurrected from the dead!

I would have loved it if Minikube itself had taken care of fixing the configs rather than suggesting deleting the profiles. Or maybe that can be a pull request :).

I hope this helps.

Posh-git on Mac using Oh My Zsh Themes

This post explains how to have posh-git prompt style in Oh My Zsh theme on Mac.

After 4 years of using Windows, I am coming back to using a Mac. And there are so many things in Windows I am missing already. One of these things is posh-git; I loved how in one glance to your prompt you know the status of your git repo: how many files changed, how many added, how many deleted, how many indexed… just love it!

Once I moved to Mac, I changed my shell to use zsh using Oh My Zsh due to the rich experience it brings to the terminal. I was delighted to see all these themes and plugins, and then started looking for a theme that provided the same information posh-git prompt provided. To my surprise, there was none! So I started my quest to see how I can change zsh, the theme, or the plugin to have such prompt.

A posh-git prompt that shows the number of files indexed and changed.

Being lazy, I wanted to change an existing theme I like with the least amount of investment. I looked in the documentation to see how I could do that, and found the customisation wiki page.

Should I override the theme?

Overriding the theme seemed to be the perfect solution; however, there were a couple of drawbacks:

  • When you override a theme, you override the theme, period! This means that if the author changes something after you have overridden it, you will not get these new changes.
  • It was a little bit too much for me to grasp! When I looked at avit theme as an example, I had questions like what is PROMPT and PROMPT2? What are all these special characters? Where is the reference/documentation to all of these? Are they theme-specific, or are they part of zsh theme reference?

Remember, I wanted to put in the least amount of effort, and I surely didn't want to learn the whole thing! But while looking into the avit theme, one thing caught my attention: there was a clear reference to what seemed to be a function, git_prompt_info. And I thought this should be it, if I could find where this function is and how to override it.

To my luck, it was mentioned as an example in the customisation wiki page!

Override the internals it is!

OK great, now I know that I can customise git_prompt_info; all I need to do is mimic whatever posh-git does in that function!

So I hit DuckDuckGo again in the hope that someone had already done this, and oh my! I found that there is already a port of it in Bash. That's great, now what should I do? Replace the call to git_prompt_info in the theme with a call to __posh_git_ps1? Or should I call it from git_prompt_info? Since git_prompt_info is an internal lib function, it is probably used in many themes, so it makes sense to just call __posh_git_ps1 from within. And to my good surprise, there is a GitHub issue in the posh-git-bash repo that discusses integrating with zsh; it's even referenced in the main file of the repo.

Initially I mistakenly called the __posh_git_ps1 function, but I soon realised that I need to print (echo) the git info just like git_prompt_info did, rather than changing any variables; for that, I should use __posh_git_echo.

And thus I ended up with a file called emad-git-prompt.zsh under the path ~/.oh-my-zsh/custom with the content of posh-git-bash here, and at the end of the file I wrote the following code:

git_prompt_info () {
  __posh_git_echo
}
I hope this helps you 🙂

Learning a New Programming Language (Go language as an Example)


This post explains why and how I learned the Go language. Hopefully this will help you to learn it quickly, or will inspire you on how to learn new languages.

The Reason to Learn a New Language

There can be many reasons why someone would want to learn a new language; the main ones to me are: 1) to solve a current business problem, 2) to learn concepts to adapt to current tools, 3) for fun and passion. Of course, you can have a mix of these reasons pushing you to learn a new language, or maybe just one of them that is strong enough.

For a very long time in my career, C# was my main programming language. I used JavaScript a lot too, but it always took a back seat until TypeScript came about and SPAs became the de facto front-end development model. So for 16 years, it has been two and a half languages for me, and I never felt the need to learn another language (Java in university doesn't count).

Why not Haskell or F#?

When functional programming became a thing again, I tried to find the right reason to learn F# (or Haskell), but with the explosion of technical information in our industry, time became even more scarce (I have three kids under 5!) and I really needed a stronger reason to spend my time learning a new language. Unfortunately, even with @DanielChambers' continuous efforts at converting me :P, I didn't jump on the wagon.

It's funny that the reason why I couldn't put in the effort was exactly the reason why functional programming itself is compelling: the paradigm shift. The paradigm shift was so big that the organisations I spend most of my time helping couldn't afford to embrace it; 20+ years of OOP meant a lot of investment in education, solutions and patterns, frameworks, and staffing, which made it hard to embrace such a change.

In my experience with these organisations, there might have been situations where functional languages could have solved a problem better than an OOP one, but the return on investment would have been small in light of these organisations' legacy.
Of course, I am not suggesting that organisations should not invest in learning and adopting new technologies; that would be the path to failure! I am just describing the situation of most of the organisations I worked with.

This ruled out the business-need reason for me, and I am left with “learning concepts to adapt to current tools” since passion was not just enough :P. Luckily, I am surrounded by friends who are passionate about functional programming, and I managed to learn from them enough about its benefits and how to bring that to my OOP world. Conversations with these friends and colleagues like Daniel Chambers, Thomas Koster, and attending lectures by professionals like Joe Bahari, have helped me a lot in adopting functional concepts to my C#.

I Found The Reasons in Go


So I stayed on two and a half languages until last year, when I got the chance to work on a project in which we used Kubernetes. Once you step into the Kubernetes world, you realise that Go is the hero language: Kubernetes is written in Go, Helm is written in Go, and the templates Helm uses are based on the Go template engine. Although you can use Kubernetes without learning Go, once you want to go a little deeper, it feels that learning Go would be an advantage.

In addition to that, with cloud being my main interest, I have been seeing Go used more and more as the language of choice for many cloud-native projects, products, and services.

During the same time, many of my colleagues and Twitter friends were porting their blogs from database-driven engines like WordPress to static website generators like Jekyll. I have two websites that could benefit from that: 1) my blog, and 2) my podcast, which I built on ASP.NET and Rob Conery's SubSonic ORM. My friend Yaser Mehraban kept teasing me and applying his peer pressure until I surrendered, and I finally started looking into moving my blog and my podcast to a static website generator.

My choice was Hugo; to me, it seemed the most mature static site generator with the least amount of churn and learning curve. And guess what, Hugo is written in Go! And the templating engine is based on Go’s. Same as Kubernetes, you don’t need to learn Go if you want to use Hugo, but it’s just another compelling reason to be familiar with the language.

So by now, it felt like I was surrounded by problems being solved with Go, and it was evident that there was a good chance I would work in Go in the future, even professionally.

All this, in addition to the low barrier to entry due to familiarity with C#, encouraged me to jump into the water.

Where did I Start?

There are so many ways a person can start learning a language; I wanted to learn the language fast, and learn just enough to get me going. For this reason, I didn't pick up a book, which would have taken me a while, even though a book is probably the most profound way.

Instead of picking up a book, I went to the Go website and checked what it has to offer; most modern projects and languages have documentation that includes tutorials and a Getting Started guide. If these guides are well crafted, they give a great learning boost, and to my luck, Go had great content.


The first thing I wanted to do was set up the environment and run the most basic example (the hello world of Go); for that, I followed the Getting Started guide. Setting up the environment as a basic step in learning a language is very important: it gives you an understanding of the requirements of the language, it sets expectations on how friendly the experience will be, and it breaks the ice. It also paves the way for the hands-on step coming soon; I will explain that step later in this article.
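For reference, that most basic example looks roughly like this (a minimal sketch; I split the greeting into a function only to make the example easy to check):

```go
package main

import "fmt"

// greeting returns the message to print; the Tour of Go traditionally
// uses "Hello, 世界" as its first program's output.
func greeting() string {
	return "Hello, 世界"
}

func main() {
	fmt.Println(greeting())
}
```

Save it as hello.go and run it with `go run hello.go`.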


Now that my environment was set up and I had run my hello world example, I needed to understand what was really going on: how the code compiles, how it runs, how it is packaged, how it is hosted; I needed the foundational concepts to establish firm ground to base my learning on. Learning the syntax and the various Go features will come along, and it will take time, but you can't postpone the foundations. For this, I followed the "How to Write Go Code" guide. The article's title might not sound foundational, but the content lays down the concepts.

Cruise as you need

If this is NOT your first programming language, then you are already familiar with the concepts of control flow: functions, loops, if clauses, etc. This gives you a very good advantage to sweep through them swiftly; it's unlikely that they are too different from other languages. A fast run-through should be enough to capture anything that stands out.

For this I used the Tour; there are two great things about it: 1) it has a simple, navigable structure, and 2) it is paired with an online playground where you can experiment and confirm your understanding on the spot. The Tour covers a wide range of topics, some of which I went through fast, and some I took my time to comprehend; e.g. slices can be a little confusing compared to arrays in C#.
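To illustrate why slices can trip up a C# developer, here is a small sketch: a Go slice is a view over an underlying array, so writing through the slice mutates the array it points at.

```go
package main

import "fmt"

func main() {
	arr := [4]int{1, 2, 3, 4} // an array: fixed size, value semantics
	s := arr[1:3]             // a slice: a view over arr's elements at index 1 and 2
	s[0] = 99                 // writing through the slice mutates arr itself

	fmt.Println(arr)           // prints [1 99 3 4]
	fmt.Println(len(s), cap(s)) // prints 2 3
}
```

Coming from C#, where copying an array gives you independent storage, this shared-storage behaviour is the part worth pausing on in the Tour.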

Note: Everyone's experience is different, so it would not make sense to list the topics I went through swiftly and the ones I spent time on; use your own experience to judge that for yourself.

As for the advanced topics, I left them for a little later, until I had a better grasp of the basics of the language; overwhelming yourself with advanced topics at this stage might have a counterproductive effect on your learning.


After understanding the basics from How to Write Go Code, and sweeping through the Tour, it was time to get my hands on the language; this is the only way you can really understand and learn it.

I needed a problem to solve so I could have a driving purpose. The problem I chose was to import the existing records of DotNetArabi (guests and episodes) from the database and create corresponding Markdown files for the Hugo website; this was my first program.

It's important to understand that I wasn't 100% on top of things yet (nor am I now :P), but it was the practical experience that I relied on to grasp the concepts and gain experience. If you leave the practical side for too long, you will find yourself forgetting the basics, or learning advanced topics that you will rarely use. An iterative approach works very well here.

So I gradually built the application; each time I got stuck, I'd either refer back to the Tour, or google the problem if it was not covered there (e.g. connecting to a database). In each of these stuck-and-solved situations, I took a moment to make sure I understood the solution and the technique behind it. Copy and paste is absolutely fine as long as you pause and comprehend.

Advanced Topics

OK, at this stage I felt like I knew the basics and was comfortable writing a program without big issues. But at this stage, writing a program in Go would give me very little advantage (if any) over writing it in another language; I wasn't getting the best out of the language. It's the advanced features that make the difference, things like goroutines and channels, by which we achieve concurrency with the least amount of maintenance overhead.

Don't be afraid of the advanced topics; avoiding them because they might be complicated will jeopardise the value we get from learning a language in the first place!

So I continued the Tour for the advanced topics. The playground was of tremendous value, as you will need to change things around to confirm your understanding. The Tour also has exercises that will poke your thoughts; I highly advise trying them out! They will not just push you to comprehend the concepts, but will also expand your horizons for the use cases where you might need these advanced features.

It is great fun, and of great value, to go back to your pet project and implement some of these advanced concepts, and this is what I did: I went back to my application and utilised goroutines to extract the data to the markdown files.
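The shape of what I did can be sketched like this (a rough illustration, not my actual code; the records and the toMarkdown helper are made up): fan out one goroutine per record and collect the results over a channel.

```go
package main

import (
	"fmt"
	"sync"
)

// toMarkdown stands in for converting one database record to markdown;
// the real program wrote files, this sketch just builds strings.
func toMarkdown(title string) string {
	return "# " + title + "\n"
}

func main() {
	episodes := []string{"episode-1", "episode-2", "episode-3"}
	results := make(chan string, len(episodes))

	var wg sync.WaitGroup
	for _, e := range episodes {
		wg.Add(1)
		go func(title string) { // one goroutine per record
			defer wg.Done()
			results <- toMarkdown(title)
		}(e)
	}
	wg.Wait()      // wait for every goroutine to finish
	close(results) // then close so the range below terminates

	for md := range results {
		fmt.Print(md)
	}
}
```

The WaitGroup plus a buffered channel keeps the coordination code tiny compared to manual thread management.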

Unit Testing

Leaving unit tests to the end wasn't undermining their value; rather, I wanted to focus on the language itself first, as test frameworks can push the complexity and the learning curve high enough on their own. My experience with JavaScript stings to this day :P.
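The good news is that Go's testing story is pleasantly minimal: the standard library's testing package plus `go test` is all you need. A minimal sketch (Slugify is a made-up helper; in a real project the test function lives in its own *_test.go file next to the code):

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// Slugify is a made-up helper that turns a title into a URL slug.
func Slugify(title string) string {
	return strings.ToLower(strings.ReplaceAll(title, " ", "-"))
}

// In a real project this lives in slug_test.go; `go test` discovers
// any function named Test* that takes *testing.T, no framework needed.
func TestSlugify(t *testing.T) {
	if got := Slugify("Hello Go"); got != "hello-go" {
		t.Errorf("Slugify returned %q, want %q", got, "hello-go")
	}
}

func main() {
	fmt.Println(Slugify("Hello Go"))
}
```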

The Best of Go

Finally, the Go website has a section called "Effective Go". This section is not really reference documentation, but it is very valuable for writing Go code the way the language intends. It provides further context and a rounded style for writing the language in its best form.

Here, too, I advise picking and choosing the topics; reading the whole thing might be counterproductive.

Close the Loop, Complete the Picture

By now you'd think you have finished, but this is just the beginning; now is the time to tie things together by revising the language's main characteristics, philosophy, and greatest advantages.

If we look specifically at Go as our example, this might be things like its simplicity, where there are no classes, no inheritance, and no generics. Or things like concurrency, and how Go deals with state in asynchronous code execution. At this stage, it is valuable to check videos like Sameer Ajmani's talk, and the literature out there that discusses "Why Go".

I also found the FAQ a valuable resource for some of the justifications and explanations. You should not read it like an article, though; pick and choose the topics of interest.

But isn't this backwards? Shouldn't I learn about these things at the beginning? True, you can learn them at the beginning, but you will not appreciate the claims until you put your hands on the problem in practice; until then, they are merely claims in the air. So even if you start with them, you should also revisit them and make sure you close the loop.


In my journey to learn Go, I did the following:
• I had a good reason
• I established the core concepts
• I installed the tools and ran the "hello world" program
• I scanned through the control flow constructs
• I put my hands on the code and wrote my first program
• I read the advanced topics, and used the playground to confirm my understanding
• I watched more videos on why to use Go and its advantages

It's important to say that choosing a language to adopt in an organisation involves more than just learning it; if you are in a position to influence such a decision, be mindful of that.

I hope this helps you out, enjoy coding :).

RBAC in Azure Kubernetes Service AKS on Twitch!

TL;DR: I will be streaming on Twitch next Monday (25th of March) at 8:30 Melbourne time (GMT+11), configuring Azure Kubernetes Service (AKS) to use RBAC.

Twitch logo

For a long while, I've been thinking about streaming live development to Twitch or YouTube. Having spent some time behind the microphone making the DotNetArabi podcast, I can say there is a satisfying feeling in producing content in a media format through which you can connect with the audience.

Why not just offline video?

I could just record an offline video and host it on YouTube, and that is definitely a valuable medium. The problem with educational videos specifically is that they are a one-way communication channel, and without the entertainment factor of movies, these videos can be daunting, imprisoning, and hard to follow.

The magic of live streaming

But with live streaming, magic happens; it adds dimensions that make it more appealing:

  1. It's LIVE! It's happening NOW, and this means a couple of things: it implicitly has the anticipation factor; things are still happening and they might take interesting turns, just like live sports. In addition, by sharing the time span during which the event is happening, the audience gets a feeling of involvement, of "I was there when it happened", even if they didn't directly interact with the broadcaster.
  2. It’s real and revealing: When I was doing my homework preparing for this, I talked to my colleague Thomas Koster, and when I asked him about what could interest him in live streaming, his answer was:
    …it’s probably more the real time nature of it that appeals – to see somebody’s thought processes in action, as long as the broadcaster doesn’t waste too much time going around in circles.
    For example, watching somebody figure out a puzzle solution in the game The Witness in real time is much more interesting and valuable than watching a rehearsed, prepared performance of only the final solution.

    This is the ultimate stage for a developer broadcaster; it requires a lot of bravery and experience. I’d love to be able to do this soon, but it’s really the 3rd reason below that drew me to streaming.

  3. It's two-way communication: the interactive communication between the broadcaster and the audience brings the video to life. It provides a timely opportunity to get the best out of this communication, whether it is the audience correcting the broadcaster, or the broadcaster being available for immediate questions.

It is specifically this last reason that got me interested in live streaming; I want this relationship with my audience, a collaborative experience where value comes from everyone and flows in all directions.

So, I am doing my first stream!

I have been following Jeff Fritz @csharpfritz and Suz Hinton @noopkat, and I am greatly inspired by their amazing work! Also, @geoffreyhuntley has started his journey and gave me the last nudge to jump into this space. I've learned a lot from Suz's post "Lessons from my first year of live coding on Twitch", and recently Jeff's "Live Streaming Setup – 2019 Edition" (don't let it scare you, you don't have to do it all!).

My next stream will be about Role Based Access Control (RBAC) in Azure Kubernetes Service (AKS); I will walk you through RBAC, the OAuth2 Device Flow, and how this works within AKS, with hands-on live deployments and configuration.

What is my goal, and what is not?

What I am trying to achieve here is two-way communication through the session I have with my audience, that’s it.

Am I going to do this constantly now?

Actually, I don’t know! To me this is an experiment; I might keep doing it, or this might be my first AND LAST stream, let’s see what the future brings. 🙂

Fix “Mixed Content” When Using Cloudflare SSL And IIS Rewrites

In this post, I explain how I fixed the “mixed content” security issue when using Cloudflare Flexible SSL, and IIS Rewrite.

I Run Two Websites Under One Account Using IIS Rewrites

I have two websites hosted under one account with my hosting provider (I know!). The way I do it is by using IIS Rewrite rules in my web.config: for any request targeting one of these domains, I "rewrite" the URL so it points to the corresponding sub-directory to serve the request. This changes where the file is served from, but does not change the request URL seen by the user.

However, if by any chance a request comes to the server targeting the sub-directory itself, that page will still be served as is, which is not desirable; I don't want to expose the innards of my websites, as it's ugly and bad for my websites' URL discovery. In this case, I first want to "redirect" the user to the domain without the sub-directory, and then run the rewrite rule mentioned above, which is what I did.

In pseudo form, when a request comes in, the execution of the rules looks like this:

  1. Rule1: Does the URL include a sub-directory? If so then Redirect to the same URL without the sub-directory.
  2. Rule2: The URL does not include the sub-directory, so Rewrite (not Redirect) to the sub-directory.
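As a sketch only (the rule names, host pattern, and sub-directory name are made up for illustration; my actual rules differ), the two rules above could look like this in web.config:

```xml
<rewrite>
  <rules>
    <!-- Rule 1: request exposes the sub-directory? Redirect to the clean URL. -->
    <rule name="RedirectSubdir" stopProcessing="true">
      <match url="^myblog/(.*)" />
      <action type="Redirect" url="{R:1}" />
    </rule>
    <!-- Rule 2: internally rewrite the clean URL into the sub-directory. -->
    <rule name="RewriteToSubdir">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^(www\.)?example\.com$" />
      </conditions>
      <action type="Rewrite" url="myblog/{R:1}" />
    </rule>
  </rules>
</rewrite>
```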

I want to Serve My Websites Over HTTPS, But…

Now when I wanted to secure my websites and start using HTTPS to serve requests, thanks to Troy Hunt’s continuous nagging :P, I couldn’t just use normal certs with my hosting due to the way I am running it. So again, based on Troy Hunt’s awareness efforts, I used Cloudflare’s Flexible SSL free service.

This went fine until I discovered that the engine of DotNetArabi generated guest image URLs that include the sub-directory. When I open DotNetArabi over HTTPS, the first request to these URLs is HTTPS (but of course containing the sub-directory); the second request, though (the redirect to the URL without the sub-directory), always comes back as HTTP! This caused the well-known "insecure; mixed content" problem.

Simply, the reason is that:

  1. With Flexible SSL, Cloudflare ALWAYS communicates with your server via HTTP; you don't have certs on your server, which is why you needed Flexible SSL in the first place!
  2. Cloudflare Flexible SSL doesn't force HTTPS unless you explicitly ask it to (via the Always Use HTTPS option). So if a request comes via HTTP, it passes it through as HTTP.

So in the case of my redirects above, what happens is the following:

  1. The request comes to Cloudflare via HTTPS, with the URL including the sub-directory
  2. The request is forwarded to my server via HTTP (NOT HTTPS!) to the sub-directory
  3. My server innocently redirects the request to the URL without the sub-directory, but using the same protocol as the current request, which is always HTTP!
  4. The user receives the redirection to the new URL, but with the HTTP protocol this time, and Cloudflare just passes it through because it does not force HTTPS.

The solution

The trick is that while Cloudflare does not use HTTPS when it forwards the request to your server, it does add the header X-FORWARDED-PROTO=https to requests whose original request used HTTPS.

So, all I needed to do was check this header in my redirects: if it carries https, redirect to HTTPS; otherwise, redirect to HTTP.

The Action part of my rule:

<action type="Redirect" url="{MapSSL:{HTTP_X_FORWARDED_PROTO}}{C:1}" appendQueryString="true" logRewrittenUrl="false" />

And the rewrite map it uses (note the closing tag; rewrite maps are declared in their own section):

  <rewriteMap name="MapSSL" defaultValue="https://">
    <add key="https" value="https://" />
    <add key="http" value="http://" />
  </rewriteMap>


HTTP Binding in PowerShell Azure Functions

In a small project, I was trying to utilize an existing PowerShell script I had and host it in Azure Functions; I needed to understand how HTTP binding works with PowerShell Azure Functions, as I didn't want to rewrite my script in C# just because PowerShell Azure Functions had "(Preview)" appended to its name.

I wanted the Function to return a plain text response to an HTTP trigger based on a query parameter (this is how Dropbox verifies Webhook URLs). So, naively, I followed the basic template as an example:

Write-Output "PowerShell HTTP function invoked"

if ($req_query_name) {
	$message = "$req_query_name"
}
else {
	$message = "wrong!"
}

[io.file]::WriteAllText($res, $message)

The first question I had was "how is the querystring parsed?" I assumed that I should replace "req_query_name" with the querystring key in the request; or should I replace the whole thing to become $myQueryParam? This is when I decided to look at the source code rather than the documentation.

Note: I try to link back to the source code wherever I can, the problem is the link does not include the commit ID, so next to the link I put the commit ID at which the file was in that state.

HTTP Binding

There are different phases that take place during a Function execution; in this post I will skip the details of how the binding is loaded, and concentrate only on how the HTTP binding operates within a PowerShell Function.


When the Azure Functions runtime receives an HTTP message for a PowerShell script that has HTTP binding, it parses the message as follows:

  • The body of the HTTP request will be saved to a temp file, the path of the temp file will be assigned to an environment variable that matches the “Name” property of the input binding configuration. If we take the following JSON as an example for our “function.json” configuration, then the name of the variable will be “req“:
       {
         "bindings": [
           {
             "name": "req",
             "type": "httpTrigger",
             "direction": "in",
             "authLevel": "function"
           },
           {
             "name": "res",
             "type": "http",
             "direction": "out"
           }
         ],
         "disabled": false
       }
    (This happens here at dcc9e1d)

  • The original URL will be saved in environment variable “REQ_ORIGINAL_URL“.
  • The HTTP request method will be saved in environment variable “REQ_METHOD“.
  • For each HTTP header "key", a corresponding environment variable "REQ_HEADERS_key" will be created.
  • The full querystring will be saved in environment variable “REQ_QUERY“, it will also be further parsed into individual variables; for each query string “key”, a corresponding variable “REQ_QUERY_key” will be created.

All of this happens before the execution of the Function, so once the Function is invoked, these variables are already available for consumption. (This happens here at dcc9e1d.)

To read the body of the request, you just read it as you read any file in PowerShell, and then you parse it according to the content; so if the body of the request is JSON, you read the file and parse it to JSON like the following:

$mycontent = Get-Content $req | ConvertFrom-Json

Note: If the Function is executing because of a triggered binding (such as HTTP), the rest of the input bindings are skipped. (Check the code here at commit dcc9e1d.)


Similar to the request, your script should write the response to a file, which in turn will be read by the Azure Functions runtime and passed to the HTTP output binding to send on your behalf. The runtime will also assign the path of this file to an environment variable that matches the Name property you define in the output binding in function.json.

So for the example above of function.json, you will write the content of your response to the file whose path is stored in “res”:

[io.file]::WriteAllText($res, $message)

This happens here at commit dcc9e1d.

Default Behaviour

Now, if the content you write to the file is a string that cannot be parsed as JSON, then it will be considered the body of the HttpMessage, the response will have the default HTTP content-type "application/json", and it will be run through the default MediaTypeFormatter. Take the following as an example:


$message = "This is a text"
[io.file]::WriteAllText($res, $message)

The response then comes back as:

Content-Type: application/json

"This is a text"

Notice that the text written to the file in the script has no quotes, but the result in the response body is wrapped in double quotes; this is because the default content-type of the response is "application/json", and the HTTP binding formats it accordingly, wrapping it in double quotes.

More Control

If we want more control over the response, then we have to write a JSON object to the file; this JSON object holds all the information on how the response should look: the headers, the body, and the response status.

The JSON object can contain the properties "body", "headers", "isRaw" (more about it below), and "statusCode" (int), for whichever you want to change. For example, if I want the content of the response to be simple text with the text/plain content-type, then the script should write the following:

$message = "{ `"headers`":{`"content-type`":`"text/plain`"}, `"body`":`"$name`"}"

There are several points that need to be brought up:

  1. If the “body” property exists, then only the value of the “body” property will be in the HttpMessage body; otherwise the whole content of the JSON object will be in the HttpMessage body.
  2. Up until the time of writing this post, Azure PowerShell functions run under PowerShell 4.0. This means that if you use the Out-File command to write to the file, it will always append a trailing newline (\r\n) at the end of the string, even if you supply the -NoNewLine parameter! Use the WriteAllText method instead.
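If escaping all those backticks gets unwieldy, the same response JSON can be built from a hashtable and serialized with ConvertTo-Json (available since PowerShell 3.0). This is a sketch; the $name value is a hypothetical one that would normally come from the request:

```powershell
# Build the response object as a hashtable instead of concatenating an
# escaped JSON string, then serialize it to the JSON the runtime expects.
$name = "Emad"   # hypothetical value read from the request

$response = @{
    statusCode = 200
    headers    = @{ "content-type" = "text/plain" }
    body       = $name
}

$message = $response | ConvertTo-Json -Depth 3

# The script would then write $message to the output file as before:
# [io.file]::WriteAllText($res, $message)
```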

The parsing can be found here at commit 3b3e8cb.


Great, so far we have managed to change the body, the headers (including the content-type), and the status of the response. But this is still not enough; depending on the content-type header, the Azure Functions runtime will find the right MediaTypeFormatter for the content and format the response body accordingly.

There are several media type formatters in the System.Net.Http.Formatting library: JsonMediaTypeFormatter, FormUrlEncodedMediaTypeFormatter, XmlMediaTypeFormatter, and others. The issue with these formatters is that they might add the UTF-8 Byte Order Mark (BOM) at the beginning of the content. If the recipient is not ready for this, it might cause a problem.

Dropbox, for example, provides a way to watch changes to a file through their API by registering a webhook, and the way Dropbox verifies the webhook is by making a request to the endpoint with a specific querystring, expecting the webhook to respond by echoing the querystring back. When I created my Function I didn’t change anything, so the runtime used the default formatter and prepended the UTF-8 BOM characters (0xEF,0xBB,0xBF) to the body, which of course was rejected by Dropbox.

The way to skip these formatters is by setting the “isRaw” property mentioned above to true. For example, the following script will write a plain text “emad1234” to the response:

$message = "{ `"headers`":{`"content-type`":`"text/plain`"}, `"body`":`"emad1234`" }"

Taking a screenshot from Fiddler in the HexView view, the response looks like this:


BOM characters in response of PowerShell Azure Function

Have you noticed the characters I surrounded with the red box? That’s the BOM.

But once we add the “isRaw” property like this:

$message = "{ `"isRaw`": true, `"headers`":{`"content-type`":`"text/plain`"}, `"body`":`"emad1234`" }"

The result will be without the BOM:


This can be found here at commit 3b3e8cb.
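As a concrete use for “isRaw”, here is a sketch of how the Dropbox verification scenario above could respond; the challenge value is an assumption for illustration, and it would normally be read from the request’s querystring:

```powershell
# Hypothetical echo of a webhook verification challenge. Setting isRaw
# skips the media type formatters, so no UTF-8 BOM is prepended and the
# body goes out exactly as written.
$challenge = "emad1234"   # would normally come from the querystring

$message = @{
    isRaw   = $true
    headers = @{ "content-type" = "text/plain" }
    body    = $challenge
} | ConvertTo-Json

# The script would then write $message to the output file:
# [io.file]::WriteAllText($res, $message)
```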


Final Notes

It’s worth mentioning that the Azure Functions runtime also provides a content-negotiation feature, so you can leave it to the request to decide.

Another parting thought: of course, you don’t have to craft your JSON object by concatenating strings together; you can use PowerShell arrays and hashtables to do that, check the articles here and here.

Finally, isn’t it awesome to be able to see all of that in the source code!


PowerShell is probably the language that got the least love from the Azure Functions team, but that doesn’t mean you should throw your scripts away; hopefully, with the tips in this post, you will find a way to use them again.

Help with DotNetArabi

DotNetArabi started eight years ago as one of the first Arabic websites to offer high-quality Arabic content, presenting many episodes with experienced and distinguished Arab technologists. The podcast began as an individual, non-profit effort at personal expense, and for several years it kept up a good pace, consistently releasing an episode every four weeks.

Over the past two years, however, releases slowed down and the gap between episodes kept growing, despite all attempts to increase production. The idea of moving the work from an individual effort to a collective one has been on my mind for years, but I could not find a clear, practical mechanism to turn it into a collective volunteer effort, one that could take up the willingness some loyal listeners have expressed to contribute. Things stayed as they were until it became necessary to try collective volunteering, even with the simplest of tools. Better late than never.

Based on that, and after consulting some friends and companions, I would like to open DotNetArabi to contributions so we can produce episodes faster and at high quality. To make contributing easier, I need to explain the episode production process and list its steps, so that a volunteer can easily pick what to help with.

Production Steps

First: Finding the Right Guest

In this step I look for a guest suitable for the show. The guest must be experienced in their field, and the available ways to verify this are:
• Looking for the guest’s publications, such as a blog or high-quality articles.
• Looking for the guest’s contributions on GitHub.
• Holding a senior technical position at their company.
• Or a direct recommendation from a trusted person.
It’s worth noting here that we don’t limit professionalism to those who hold such positions or achievements; many professionals never had the chance to do these things, but for DotNetArabi this is the available way to verify a guest’s ability.

Whoever volunteers for this task would research the guest and then send me some links they found that list the guest’s achievements.

It’s worth mentioning that this step is open to everyone, without any need for coordination.

Second: Scheduling

In this step I contact the guest, tell them about DotNetArabi, and offer to record an episode with them. If the guest accepts, we arrange a recording date and outline the key points of the upcoming episode.

Third: Recording the Episode

In this step the episode is recorded with the guest over Skype.

For the previous two steps, “Scheduling” and “Recording the Episode”, I think it would be hard for anyone other than the show’s host to do them.

Fourth: Audio Production

Recording the episode produces an MP3 file that needs processing, which includes the following:
• Cutting out segments with mistakes, stumbles, and unwanted fillers such as “aaah”.
• Improving sound quality by applying audio filters.
• Creating a new MP3 file and editing its properties, such as the title, the artwork, and so on.

This step requires some technical skill; it doesn’t need much knowledge, but it does need practice. I therefore expect to train the volunteer on the production process, and to review the first few episodes closely before fully handing over the task.

Fifth: Publishing the Episode

This step includes uploading the MP3 file to the website, writing the introduction on the website, and announcing the episode on social media. This step also involves some technical details, and I will certainly help whoever takes it on at the start.

With these five steps the episode is done, and the journey begins again with another episode.

How We Will Cooperate

Each volunteer will list the tasks they want to volunteer for, and several volunteers may apply for the same task. Accordingly, each episode will have a different arrangement depending on each volunteer’s schedule and availability. The tool I chose to coordinate these steps among volunteers is Trello, a tool built on the idea of a Kanban board, where each episode will have a card that moves between the steps, represented as columns on the board.

Each volunteer can pick up a card in the column of the step they want to work on, assign it to themselves until they finish, then push it to the next step’s column, and so on.

“What do I get if I volunteer?”

One might ask: “What do I get if I volunteer?” Besides contributing to growing others’ knowledge and enriching Arabic content on the internet, everyone who volunteers will be thanked; and since DotNetArabi is not a for-profit organization, the thanks will take the form of crediting everyone who took part in producing the episode in the episode’s summary on the website.

What Now?

If you would like to take part in producing DotNetArabi episodes, send a message to “emad.ashi” on GMail, and we will arrange things with you and explain whatever this article couldn’t. And if you don’t want to take part but have any advice, comment, or criticism, please don’t hesitate to send it too.

Thank you for your interest, and let’s stay in touch.

Productivity Satisfaction Maturity Levels

Such a fancy title, ha? Probably the influence of our industry (a bad influence)! Well, you can just substitute it with something like “These are the stages of productivity between which satisfaction jumps in exponential magnitudes”.

Note: before we check these stages out, it goes without saying that every “he” in this article is absolutely replaceable with “she”; it’s just that the “he/she” style is too verbose.

0. Ignorance

In this stage the individual doesn’t know what he is missing; he does not add any value to himself or the community; he enjoys “time-waste” activities, like watching TV or YouTube. Indeed there is joy in being a couch potato, but it is negligible compared to the next levels, which he hasn’t experienced yet, which explains why I gave it the number 0.

Note that I am not talking about planned recreation activities after productive accomplishments, I am talking here about this kind of activity as being THE activity the individual’s time is mostly spent on.

Also note that I am not trying to degrade anyone here; people might be in this stage due to circumstances out of their control, or because they haven’t tasted the satisfaction of the next levels.

1. Knowing

In this stage the individual learns something new; he watches documentaries, reads books, etc. The satisfaction of “knowing” tingles the brain with every new piece of knowledge acquired. It’s an intrinsic part of human nature as intellectual beings.

This is where the majority of people are, and where they usually get stuck; the number of books read becomes the gauge of the individual’s pride, not the utilization of the value gained from reading them.

2. Sharing

Reading books is not enough at this stage; there is an overflow of excitement spilling over and around. The minute he sees others’ reactions when he shares the knowledge, the satisfaction doubles; he looks for every occasion at which he can cultivate the excitement of passing the knowledge on.

Nonetheless, it’s important to understand that sharing knowledge at this level is limited to one-to-one interactions, or at most a group of friends at a hangout.

3. Doing

The individual has read about his favorite topic a lot, and he has talked about it to others a lot. Say he loves carpentry: he loves reading about it, visiting galleries, appreciating carpenters at work… now what? He starts doing; he takes the first step in transforming this knowledge into action: he buys the tools, and he starts working on his first piece.
He also discovers how difficult it is; he might hit some frustrations, but he keeps going in small but steady steps until he creates his first piece! Once he finishes, the satisfaction is indescribable! He keeps looking at it, and in his mind it echoes: “this is me, I did this!”, “this piece of art didn’t exist before I started working on it”, “this solution solves that problem I had”, “I added value”.

This phase, though, is very difficult to step into, and there are several reasons why:

  • It’s not easy to discover; the majority of people are not doing it, so it doesn’t occur to him that there is more than sharing, and that there is a greater satisfaction than just knowing.
  • Lack of self-confidence: even if it occurs to him that doing could be much more satisfying, he does not have the confidence in himself to take action.
  • Doing can be difficult, expensive, and can require effort and sacrifice. It’s not always easy to do, depending on your circumstances or the field you are in, e.g. programming; it’s definitely more accessible to start an Open Source project than to get involved in a nuclear physics lab to try something out.

Being the most difficult stage to get into, I have to stop here and give a little push and help if I can. I tell you in a very loud, clear, slow voice: “IF YOU ARE NOT DOING, YOU ARE MISSING!”, and I am not going to try the “stop procrastinating” or “just do it” style; it’s up to you, but you are missing a lot! When you decide between flipping through a game on your mobile or opening your development IDE, remember that you are giving away a joy that is magnitudes greater than the joy of playing a game of Sudoku.

4. Influencing

He did, and did, and did more; now he starts presenting at user groups, he writes about it on his blog, and he teaches it. He thought that the ultimate joy was in doing, but he was wrong; he starts seeing others doing because he showed them the path, because he helped, because he provided so much value that it started influencing others to do and add value themselves… BOOM! A new level of joy.

This also gives him a boost of endurance and patience to support others; he is happy when he receives an inquiry email or when someone approaches his desk for a consultation. The success of others becomes his success.

5. Scaling

What can come after influencing? I can only assume Scaling: in this stage, he has probably written a book, or become a thought leader, or an international speaker; now he is a public figure. And no, no… it’s not the fame I am talking about; it is the unquantified accumulation of value he has added to so many people, a value so big in momentum that it brings satisfaction and joy equal to the sum of all the satisfaction and joy he brought to people through his influence. He bumps into people he has never met, and they thank him for what he did for them!

Finally, remember that learning never stops. Check which stage you are at, and know that there is much more satisfaction in the next. In a nutshell: satisfaction is just a synonym for adding value.

I Have Been Hacked!

Yes, I’ve been hacked, and it wasn’t fun! In this post I will go through some of the lessons learned. But before that, let’s shed some light on what happened.

It began when a friend of mine notified me that my DotNetArabi blog, which is a WordPress blog, had new suspicious and unrelated posts. I rushed to my admin page, deleted these posts, and then changed my password to a stronger one.

I wasn’t too afraid of the impact; after all, this is an Arabic podcast blog, while the posts were in English. In addition, most likely only a few of the audience saw these posts (since they were recent), and those who did would excuse me and understand that something went wrong (I like my audience :P).

After deleting these posts, I also thought I should check my folders and files, and indeed, when I did, I found hundreds and hundreds of files that weren’t part of the WordPress files, most of them created on a single day. Deleting these wasn’t as easy as deleting the posts, though: there were many files, they were in different folders, I didn’t know all the WordPress files well enough to distinguish them from these files, my host provider does not provide a file management system, and the files didn’t have much in common to form a single rule to delete them by (maybe the date was a good indicator, but it wasn’t good enough).

Fair enough; since the harm was quarantined for now (or so I thought!), I decided to take this task at ease, deleting the files in batches. This decision was also influenced by the fact that FileZilla kept disconnecting; I couldn’t just select many suspicious files and delete them.

Days passed by, and I received an email from my host provider informing me that I had been the victim of a hack; the email listed a couple of files as a sample of the many files (_the_ files) that were sending spam to others. I already knew about the files, but I didn’t know about the “sending spam” part. Of course, I should have known better; why would these files exist in the first place?! Duh!

Anyway, my host provider urged me to take action, but they didn’t mention taking any measures if I didn’t, so I kept doing what I was doing: deleting files at ease, even though I received probably another one or two of the same emails from my host provider.

A week or so later, my Google Analytics numbers flattened to 0! Being lazy (actually I was in the middle of moving houses, so I shouldn’t bash myself here :P), I didn’t check what the reason was; I thought I could check it in a couple of days, and maybe it was the mobile app I use to read my analytics rather than the analytics themselves.

And then a different email reached my inbox: “your website has been suspended for the last 3 days because it’s been a source of spam”! This is when I freaked out; it’s true that I don’t make money off the hits to my blog, but being down for that long is bad, bad, bad for reputation.

I instantly sent them an email explaining how angry I was about their inadequate notification/action protocol; their initial notifications didn’t mention any threat of closing down the website, and their notification of closing down the website came 4 days after they had closed it down!

I demanded that they put it up again ASAP, and I also promised to remove the malicious files. They refused! No going live again before all the files were deleted.

Being under pressure, I had to try all sorts of things, to the extent that I tried Windows Explorer’s built-in FTP client, and to my surprise, it worked better than FileZilla! I was happy seeing that green progress bar deleting all those awful files. After I made sure I had deleted everything that looked suspicious to me, I sent the host provider another email informing them that everything was fine now and my website was ready to go up again (yes, they don’t have chat support, only email).

Hours and hours later, I received an email from them again saying that I still had malicious files, “Here is a sample”, and that the website would not be up until this was solved. This time, though, they provided me with two options: either delete the whole website and upload from a backup I had (which was potentially infected as well), or pay for an hourly service to fix the problem for me.

I decided to go with the first option, but rather than deleting the whole website, I asked them to delete only the suspicious folder. Hours and hours later we managed to do this, and finally my website was up again (I went through more problems after that, but maybe we can save those for the list of lessons below).

Not a short story, judging by the narration above. Now let’s look into the lessons learned and how things relate to each other.

You have a website? You are already a target

Security hasn’t been something I neglected, but it was something I miscalculated; the hacked part of my website was my podcast DotNetArabi’s blog, and my thinking had always been: “Why would someone hack my podcast blog? My audience is very specific; it does not host any sensitive information; the ROI of hacking it is little compared to other sites… so the possibility of being a victim of hacking is very minimal.”

But they weren’t after my website, the content, or my audience; they were after the resources my website runs on! My website became a platform to annoy others. I agree, I should’ve known better, but the comfort of not doing a lot to secure my website, along with the “low possibility” of being a target, made me feel good about not securing it!

Do you have a website that you manage? GO SECURE IT NOW!! Do everything necessary to secure it; if it is a WordPress blog, check the points below, and if not, look up how to secure it. YOU ARE A TARGET… RUN… NOW!

Don’t be Lazy

One of the reasons why I ended up in a bad situation is that I was a little lazy; I know I was moving houses and was too busy, but I also knew about having the malicious files before, and I took it easy, tsk tsk tsk Emad, bad!

Windows Explorer’s FTP client VS FileZilla

For a long time I looked down on Windows Explorer’s FTP client, especially compared to products that have been on the market for a long time, like FileZilla. To my surprise, for the specific task of deleting files, Windows Explorer’s FTP client outperformed FileZilla: no disconnections at all. If deleting the files hadn’t been such a difficult task due to the bad tool, I might have been in a better position.

Don’t put all your eggs in one basket

I have one site account with my host in which I put 3 websites; the resources these websites needed were really minimal, so I just created subfolders and created a web app in each folder: one for my personal blog, one for my DotNetArabi podcast, and a blog for the same podcast. This was made possible by some URL rewriting tricks.

The plague didn’t hit all of them; it only hit the blog of the podcast. But when the host decided to take the website down, it took them all down, simply because to my host it’s a single website.

Regardless of my host’s decision to take the website down, there are many things that can go wrong with a website which might affect all the subsites. Separation is good in this case.

Manage your backups

Like I said, I had 3 websites in 3 folders, so I didn’t manage the backup of the website in its entirety; instead I managed the backups separately. Makes sense? Well, I also had a web.config in the root in which I laid out the URL rewriting rules, without which the internal links to my blog posts would be broken (shout out to Maher for his help and notifications). And you guessed right, my dear reader: I didn’t back this one up. In fact, I did back it up, but by mere coincidence! *slaps self’s hand*. So make sure you back up your website in its entirety.

Also, I thought I knew where my backups were. I was wrong! I was disappointed that I had to go looking for them: were they on the external drive? On my personal computer? In my personal VM on my work computer?

Your host’s influence

This is very important; let’s see:

  • Communication: It was good of my host to notify me of the hack, but they didn’t give me a clear message on what specifically I should do, or the potential outcomes if I didn’t. Instead of sending me a sample of the malicious files, they could have sent me a list of all of them, saving me (and them) the time and effort of looking them up. I can hear you say that this is not their problem, but considering the effort and time they gave away in the back-and-forth communication, and the spam afflicting their servers, I reckon it would have been better if they had just sent me the full list of files.
    Also, they didn’t make it clear that they would shut me down if I didn’t delete these files in a timely manner; if they had, I would have been more active and keen to delete them. My impression was that the effect of these files was minimal.
  • Response time: my host does not provide chat support, only email; this meant long latency before we could cooperate and solve the problem, especially around the notification that my website had been taken down 3 days earlier.
  • To their credit, in their last email after the problem was solved, they suggested a couple of points on how to secure a WordPress blog; nothing fancy or detailed, but it was good of them, I guess.

Use scan service?

I deliberately put a question mark at the end of this title; I am not sure how good such services are. My host advised me to use SiteLock, but don’t take this as a recommendation, as I haven’t tried it yet; I just think it’s worth mentioning here.

Securing WordPress

There is numerous content on the web about securing a WordPress blog; here is one. But without being too sophisticated, these are the most important things to do:

  • Make sure that the engine is up to date
  • Make sure the plugins are up to date
  • Make sure you use a strong password
  • FTP access: to be able to upload media content to your blog, you might need to provide FTP access (if the installation didn’t set it up). If you are hosting your WordPress blog on Linux, DO NOT GIVE 777 permissions!


It all came down to me belittling the possibility of being hacked! So let me ask this again: do you have a website? You are already a target; don’t be lazy, go secure it NOW!

“Cloud-Ready Web Apps With ASP.NET 5” – Ignite Australia

It was a wonderful week last week, spent in the beautiful Gold Coast after a very interesting Microsoft Ignite conference. I got the opportunity to present on how ASP.NET 5 is designed to be suitable for hosting on the cloud; the following is the recording of my session:

If you missed the event, you can catch up with recordings of the sessions on Channel 9; videos are still being uploaded.