Author Archives: Emad Alashi

HTTP Binding in PowerShell Azure Functions

In a small project, I was trying to reuse an existing PowerShell script I had by hosting it in Azure Functions. I needed to understand how HTTP bindings work with PowerShell Azure Functions, as I didn’t want to rewrite my script in C# just because PowerShell Azure Functions still has “(Preview)” appended to its name.

I wanted the Function to return a plain text response to an HTTP trigger based on a query parameter (this is how Dropbox verifies Webhook URLs). So, naively, I followed the basic template as an example:

Write-Output "PowerShell HTTP function invoked"

if ($req_query_name) {
	$message = "$req_query_name"
}
else {
	$message = "wrong!"
}

[io.file]::WriteAllText($res, $message)

The first question I had was: how is the querystring parsed? I assumed that I should replace “req_query_name” with the querystring key from the request, but should I replace the whole thing to become $myQueryParam? This is when I decided to look in the source code rather than the documentation.

Note: I try to link back to the source code wherever I can; the problem is that the links do not include the commit ID, so next to each link I put the commit ID at which the file was in the described state.

HTTP Binding

There are different phases that take place during a Function execution; in this post I will skip the details of how the binding is loaded, and concentrate only on how the HTTP binding operates within a PowerShell Function.


When the Azure Functions runtime receives an HTTP message for a PowerShell script that has an HTTP binding, it parses the message as follows:

  • The body of the HTTP request will be saved to a temp file, and the path of that temp file will be assigned to an environment variable that matches the “Name” property of the input binding configuration. If we take the following JSON as an example “function.json” configuration, then the name of the variable will be “req“:
       {
         "bindings": [
           {
             "name": "req",
             "type": "httpTrigger",
             "direction": "in",
             "authLevel": "function"
           },
           {
             "name": "res",
             "type": "http",
             "direction": "out"
           }
         ],
         "disabled": false
       }

    (This happens here at dcc9e1d)

  • The original URL will be saved in environment variable “REQ_ORIGINAL_URL“.
  • The HTTP request method will be saved in environment variable “REQ_METHOD“.
  • For each HTTP header “key”, a corresponding environment variable “REQ_HEADERS_key” will be created.
  • The full querystring will be saved in environment variable “REQ_QUERY“; it will also be further parsed into individual variables: for each querystring “key”, a corresponding variable “REQ_QUERY_key” will be created.

All of this happens before the execution of the Function, so once the Function is invoked these variables are already available for consumption. (This happens here at dcc9e1d.)
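As a sketch of what this looks like from inside the Function (the query key “challenge” here is hypothetical, standing in for whatever key your caller sends, e.g. a request like GET /api/MyFunc?challenge=abc123):

```powershell
# All of these are plain environment variables set by the runtime
# before the script runs; "challenge" is a hypothetical query key.
$method      = $env:REQ_METHOD          # e.g. "GET"
$originalUrl = $env:REQ_ORIGINAL_URL    # the full original URL
$fullQuery   = $env:REQ_QUERY           # e.g. "?challenge=abc123"
$challenge   = $env:REQ_QUERY_challenge # e.g. "abc123"
```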

To read the body of the request, you just read it as you would read any file in PowerShell, and then parse it according to the content; so if the body of the request is JSON, you read the file and parse it as JSON like the following:

$mycontent = Get-Content $req | ConvertFrom-Json

Note: If the Function is executing because of a triggered binding (such as HTTP), the rest of the input bindings are skipped. (Check the code here at commit dcc9e1d.)


Similar to the request, your script should write the response to a file, which in turn will be read by the Azure Functions runtime and passed to the HTTP output binding to be sent on your behalf. The runtime will also assign the path of this file to an environment variable that matches the Name property you define in the output binding in function.json.

So for the example above of function.json, you will write the content of your response to the file whose path is stored in “res”:

[io.file]::WriteAllText($res, $message)

This happens here at commit dcc9e1d.

Default Behaviour

Now, if the content you write to the file is a string that cannot be parsed as JSON, then it will be considered the body of the HttpMessage, the response will have the default HTTP content-type “application/json”, and it will be run through the default MediaTypeFormatter. Take the following as an example:


 $message = "This is a text"


Content-Type: application/json

"This is a text"

Notice that the text written to the file in the script is without quotes, but the result in the response body is in double quotes; this is because the default content-type of the response is “application/json”, and the HTTP binding will format it accordingly and wrap it in double quotes.

More Control

If you want more control over the response, then you have to write a JSON object to the file; this JSON object will hold all the information on how the response should look: the headers, the body, and the response status.

The JSON object can contain the properties “body“, “headers“, “isRaw” (more about it below), and “statusCode” (int), whichever you want to change. For example, if I want the content of the response to be simple text with a text/plain content-type, then the script should write the following:

$message = "{ `"headers`":{`"content-type`":`"text/plain`"}, `"body`":`"$name`"}"

There are several points that need to be brought up:

  1. If the “body” property exists, then only the value of the “body” property will be in the HttpMessage body; otherwise the whole content of the JSON object will be in the HttpMessage body.
  2. Up until the time of writing this post, PowerShell Azure Functions run under PowerShell 4.0; this means that if you use the Out-File command to write to the file, it will always append a line feed (\r\n) at the end of the string, even if you supply the -NoNewLine parameter! Use [io.file]::WriteAllText instead.

The parsing can be found here at commit 3b3e8cb.


Great, so far we have managed to change the body, the headers (including the content-type), and the status of the response. But this is still not the whole story; depending on the content-type header, the Azure Functions runtime will find the right MediaTypeFormatter for the content and format the response body accordingly.

There are several types of MediaTypeFormatters in the System.Net.Http.Formatting library: JsonMediaTypeFormatter, FormUrlEncodedMediaTypeFormatter, XmlMediaTypeFormatter, and others. The issue with these formatters is that they might add the UTF-8 Byte Order Mark (BOM) at the beginning of the content; if the recipient is not ready for this, it might cause a problem.

Dropbox, for example, provides a way to watch changes to a file through their API by registering a webhook, and the way Dropbox verifies the webhook is by making a request to the endpoint with a specific querystring, expecting the webhook to respond by echoing the querystring back. When I created my Function I didn’t change anything, thus the runtime used the default formatter and appended the UTF-8 BOM characters (0xEF,0xBB,0xBF) to the beginning of the body, which of course was rejected by Dropbox.

The way to skip these formatters is by setting the “isRaw” property mentioned above to true. First, for comparison, the following script writes the plain text “emad1234” to the response without setting “isRaw”:

$message = "{ `"headers`":{`"content-type`":`"text/plain`"}, `"body`":`"emad1234`" }"

Taking a screenshot from Fiddler in the HexView view, the response looks like this:


BOM characters in response of PowerShell Azure Function

Have you noticed the characters I surrounded with the red box? That’s the BOM.

But once we add the “isRaw” property like this:

$message = "{ `"isRaw`": true, `"headers`":{`"content-type`":`"text/plain`"}, `"body`":`"emad1234`" }"

The result will be without the BOM:


This can be found here at commit 3b3e8cb.


Final Notes

It’s worth mentioning that the Azure Functions runtime also provides a content-negotiation feature, so you can leave it to the request to decide the format.

Another parting thought: of course you don’t have to craft your JSON object by concatenating strings together; you can use PowerShell arrays and hashtables to do that, check the articles here and here.
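As a minimal sketch of that idea, building the same isRaw response shown earlier with a hashtable instead of backtick-escaped strings (the header and body values are just the ones used above):

```powershell
# Build the response object as a hashtable and let ConvertTo-Json
# produce the JSON string; no escaped quotes needed.
$response = @{
    isRaw   = $true
    headers = @{ 'content-type' = 'text/plain' }
    body    = 'emad1234'
}
$message = $response | ConvertTo-Json -Compress

# $message now holds JSON equivalent to
# {"isRaw":true,"headers":{"content-type":"text/plain"},"body":"emad1234"}
# (property order may vary), ready for [io.file]::WriteAllText($res, $message)
```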

Finally, isn’t it awesome to be able to see all of that in the source code?


PowerShell is probably the language that got the least love from the Azure Functions team, but this does not mean you should throw your scripts away; hopefully, with the tips in this post, you will find a way to use them again.

Help Out DotNetArabi

DotNetArabi started eight years ago as one of the first Arabic websites to offer high-quality Arabic content, presenting many episodes with experienced and accomplished Arab technologists. The podcast began as an individual, non-profit effort at personal expense, and for several years it kept a good pace of one episode every four weeks, consistently.

Over the past two years, however, episode releases began to slow down, and the gap between one episode and the next grew longer despite every attempt to increase production. The idea of moving the work from an individual effort to a collective one has been on my mind for years, but I could not find a clear, practical mechanism I could rely on to turn the work into a collective volunteer effort, one through which I could take advantage of the willingness some loyal listeners have expressed to take part in this work. Things stayed as they were until it became necessary to pursue the idea of collective volunteer work, even with the simplest of tools. Better late than never.

Based on this, and after consulting some friends and companions, I would like to open the door for participation in DotNetArabi, to produce episodes faster and at high quality. To make participation easier, I need to explain the episode production process and list its steps, so a volunteer can easily choose what to contribute to.

Production Steps

First: Finding the Right Guest

In this step I search for a suitable guest for the show. The guest is required to have expertise in their field, and the available ways to establish this are:
• Searching for the guest's publications, such as a blog or high-quality articles.
• Searching for the guest's contributions on GitHub.
• The guest holding a senior technical position at their company.
• Or a direct recommendation from a trusted person.
It should be noted here that we do not limit professionalism to those who have attained these positions or achievements; there are many professionals who never had the opportunity to do these things, but for DotNetArabi this is the available way to verify a guest's ability.

Whoever wishes to volunteer for this task will search for a guest and then send me some links they found documenting the guest's achievements.

It is worth mentioning that this step is open to everyone, with no coordination needed.

Second: Arranging the Appointment

In this step I contact the guest, tell them about DotNetArabi, and offer to record an episode with them. If the guest accepts, we proceed to arrange a time to record the episode and outline its key talking points.

Third: Recording the Episode

In this step the episode is recorded with the guest over Skype.

For the two previous steps, "Arranging the Appointment" and "Recording the Episode", I think it would be difficult for anyone other than the show's host to carry them out.

Fourth: Audio Production

Recording the episode produces an MP3 audio file that needs processing, which includes the following:
• Cutting out segments containing mistakes, stumbles, and unwanted prolongations such as "aaaah".
• Improving the audio quality by running it through audio filters.
• Creating a new MP3 file and editing its properties, such as the file title, the icon, and so on.

This step requires some technical skill; it does not need much knowledge, but it does need practice. I therefore expect to train the volunteer on how to do the production, and to review the first few episodes carefully before handing over the task completely.

Fifth: Publishing the Episode

This step involves uploading the MP3 file to the website, writing the introduction on the website, and announcing the episode on social media. This step also involves some technical details; I will certainly help whoever takes on this task at the beginning.

With these five steps the episode is complete, and the journey begins again with another episode.

How We Will Cooperate

Each volunteer will list the tasks they wish to volunteer for, and several volunteers may apply for the same task. Accordingly, each episode will have a different arrangement depending on the volunteers' schedules and availability. The tool I have chosen to coordinate these steps among volunteers is Trello, a tool built on the idea of what is called a Kanban board, where each episode will have a card that moves between the steps, represented as columns on the board.

Each volunteer will be able to pick up a card in the column of the step they want to work on, assign it to themselves until the work is done, then push it to the next step's column, and so on.

"What will I gain if I volunteer?"

Someone may ask: "What will I gain if I volunteer?" In addition to contributing to others' knowledge and enriching Arabic content on the internet, everyone who volunteers for this work will be thanked; and since DotNetArabi is not a for-profit organization, the thanks will take the form of crediting everyone who took part in producing an episode in that episode's summary on the website.

What Now?

If you would like to take part in producing DotNetArabi episodes, send a message to "emad.ashi" on GMail, and we will arrange things with you and explain whatever could not be covered in this article. And if you do not wish to take part but have any advice, comment, or criticism, please do not hesitate to send it as well.

Thank you for your interest, and let's stay in touch.

Productivity Satisfaction Maturity Levels

Such a fancy title, huh? Probably the influence of our industry (a bad influence)! Well, you can just substitute it with something like “these are the stages of productivity, between which satisfaction jumps in exponential magnitudes”.

Note: before we check these stages out, it goes without saying that every “he” in this article is absolutely replaceable with “she”; it’s just that the “he/she” style is too verbose.

0. Ignorance

In this stage the individual doesn’t know what he is missing; he does not add any value to himself or the community; he enjoys “time-waste” activities, or watching TV and YouTube. Indeed there is joy in being a couch potato, but it is negligible compared to the next levels, which he hasn’t experienced yet, which explains why I gave this stage the number 0.

Note that I am not talking about planned recreation activities after productive accomplishments, I am talking here about this kind of activity as being THE activity the individual’s time is mostly spent on.

Also note that I am not trying to degrade anyone here; people might be in this stage due to circumstances out of their control, or because they haven’t tasted the satisfaction of the next levels.

1. Knowing

In this stage the individual learns something new; he watches documentaries, reads books, etc., and the satisfaction of “knowing” tingles the brain with every new piece of knowledge acquired. It’s an intrinsic part of human nature, as we are intellectual beings.

This is where the majority of people are, and where they usually get stuck; the number of books read becomes the gauge of the individual’s pride, not the utilization of the value gained from reading them.

2. Sharing

Reading books is not enough at this stage; there is an overflow of excitement spilling over and around. The minute he sees others’ reactions when he shares the knowledge, the satisfaction doubles; he looks for every occasion on which he can cultivate the excitement of passing the knowledge on.

Nonetheless, it’s important to understand that sharing knowledge at this level is limited to one-to-one interactions, or at most a group of friends on a hangout.

3. Doing

The individual has read about his favorite topic a lot, and he has also talked about it to others a lot. Say he loves carpentry: he loves reading about it, visiting galleries, appreciating carpenters at work… now what? He starts doing; he takes the first step in transforming this knowledge into action: he buys the tools, and he starts working on his first piece.
He also discovers how difficult it is; he might hit some frustrations, but he keeps going in small but steady steps until he creates his first piece! Once he finishes, the satisfaction is indescribable! He keeps looking at it, and in his mind it echoes: “this is me, I did this!”, “this piece of art didn’t exist before I started working on it”, “this solution solves that problem I had”, “I added value”.

This phase, though, is very difficult to step into, and there are several reasons why:

  • It’s not easy to discover; since the majority of people are not doing, it doesn’t occur to him that there is more beyond sharing, and that there is a satisfaction greater than just knowing.
  • Lack of self-confidence: even if it occurs to him that doing could be much more satisfying, he does not have the confidence in himself to take action.
  • Doing can be difficult, expensive, and can require effort and sacrifice. It’s not always easy to do depending on your circumstances or the field you are in; in programming, for example, it’s definitely more accessible to start an Open Source project than to get involved in a nuclear physics lab to try something out.

Being the most difficult stage to get into, I have to stop here and give a little push and help if I can. I tell you in a very loud, clear, and slow voice: “IF YOU ARE NOT DOING, YOU ARE MISSING OUT!” I am not going to try the “stop procrastinating” or “just do it” style; it’s up to you, but you are missing a lot! When you decide between flipping through a game on your mobile and opening your development IDE, remember that you are giving away a joy that is magnitudes greater than the joy of a game of Sudoku.

4. Influencing

He did, and did, and did more. Now he starts presenting at User Groups, he writes about it on his blog, and he teaches it. He thought that the ultimate joy was in doing, but he was wrong; he starts seeing others doing because he showed them the path, because he helped, because he provided so much value that it started influencing others to do and add value themselves… BOOM! A new level of joy.

This also gives him a boost of endurance and patience to support others; he is happy when he receives an inquiry email or when someone approaches his desk for a consultation. The success of others becomes his success.

5. Scaling

What can come after influencing? I can only assume Scaling: in this stage, he has probably written a book, or become a thought leader, or an international speaker; now he is a public figure. And no, no… it’s not the fame I am talking about; it is the unquantifiable accumulation of value he has added to so many people, a value so big in momentum that it brings satisfaction and joy equal to the sum of all the satisfaction and joy he brought to people through his influence. He bumps into people he has never met, and they thank him for what he did for them!

Finally, remember that learning never stops; check which stage you are at, and know that there is much more satisfaction in the next. In a nutshell: satisfaction is just a synonym for adding value.

I Have Been Hacked!

Yes, I’ve been hacked, and it wasn’t fun! In this post I will go through some of the lessons learned. But before that, let’s shed some light on what happened.

It began when a friend of mine notified me that my DotNetArabi blog, which is a WordPress blog, had new suspicious and unrelated posts. I rushed to my admin page, deleted those posts, and then changed my password to a stronger one.

I wasn’t too afraid of the impact; after all, this is an Arabic podcast blog while the posts were in English. In addition, most likely only a few people saw these posts (since they were recent), and those who did would excuse me and understand that something went wrong (I like my audience :P).

After deleting those posts I thought I should also check my folders and files, and indeed when I did, I found hundreds and hundreds of files that aren’t part of WordPress, most of them created on a single day. Deleting these wasn’t as easy as deleting the posts, though: there were many files, they were in different folders, I didn’t know all the WordPress files well enough to distinguish them from the intruders, my host provider does not provide a file management system, and the files didn’t have much in common to derive a single rule to delete them by (the date was a decent indicator, but not good enough).

Fair enough; since the harm was quarantined for now (or so I thought!), I decided to take this task easy and delete the files in batches. This decision was also influenced by the fact that FileZilla kept disconnecting, so I couldn’t just select many suspicious files and delete them at once.

Days passed by, and I received an email from my host provider informing me that I had been the victim of a hack; the email listed a couple of files as a sample of the many files that were sending spam to others. I already knew about the files, but I didn’t know about the “sending spam” part. Of course, I should have known better; why else would these files exist in the first place?! Duh!

Anyway, my host provider urged me to take action but didn’t mention anything about taking measures if I didn’t, so I kept doing what I was doing: deleting files at my leisure, even though I received probably another one or two of the same email from them.

A week or so later, my Google Analytics numbers flattened to 0! Being lazy (actually I was in the middle of moving houses, so I shouldn’t bash myself here :P), I didn’t check what the reason was; I thought I could check it in a couple of days, and maybe it was the mobile app I use to read my analytics rather than the analytics themselves.

And then a different email reached my inbox: “your website has been suspended for the last 3 days because it has been a source of spam”! This is when I freaked out; it’s true that I don’t make money off the hits to my blog, but being down for that long is bad, bad, bad for reputation.

I instantly sent them an email explaining how angry I was about their inadequate notification/action protocol; their initial notifications didn’t mention any threat of closing down the website, and their last notification, the one about closing down the website, came 4 days after they had closed it down!

I demanded that they put it up again ASAP, and I also promised to remove the malicious files. They refused! No going live again before all the files were deleted.

Being under pressure, I had to try all sorts of things, to the extent that I tried Windows Explorer’s built-in FTP client, and to my surprise, it worked better than FileZilla! I was happy watching that green progress bar delete all those awful files. After I made sure I had deleted everything that looked suspicious to me, I sent the host provider another email informing them that everything was fine and my website was ready to go up again (yes, they don’t have chat support, only email).

Hours and hours later, I received an email from them again saying that I still had malicious files, “and here is a sample”; the website would not go up until this was solved. This time, though, they provided me with two options: either delete the whole website and re-upload from a backup I had (which was potentially infected as well), or pay for an hourly service to fix the problem for me.

I decided to go with the first option, but rather than deleting the whole website, I asked them to delete only the suspicious folder. Hours and hours later we managed to do this, and finally my website was up again (I went through more problems after that, but maybe we can save those for the list of lessons below).

Not a short story, looking at the narration above; now let’s look into the lessons learned and how things relate to each other.

You have a website? You are already a target

Security hasn’t been something I neglected, but it was something I miscalculated; the hacked part of my website was my podcast DotNetArabi’s blog, and my thinking had always been: “Why would someone hack my podcast blog? My audience is very specific; it does not host any sensitive information; the ROI of hacking it is little compared to other sites… so the possibility of being a victim of hacking is very minimal.”

But they weren’t after my website, the content, or my audience; they were after the resources my website runs on! My website became a platform to annoy others. I agree, I should have known better, but the comfort of not doing much to secure my website, along with the “low possibility” of being a target, made me feel good about not securing it!

Do you have a website that you manage? GO SECURE IT NOW!! Do all that is necessary to secure it; if it is a WordPress blog, check the points below, and if not, look up how to secure it. YOU ARE A TARGET… RUN… NOW!

Don’t be Lazy

One of the reasons why I ended up in a bad situation is that I was a little lazy; I know I was moving houses and was too busy, but I also knew about the malicious files beforehand, and I took it easy. Tsk tsk tsk, Emad, bad!

Windows Explorer’s FTP client VS FileZilla

For a long time I looked down on Windows Explorer’s FTP client, especially compared to products that have been on the market for a long time, like FileZilla. To my surprise, for the specific task of deleting files, Windows Explorer’s FTP client outperformed FileZilla: no disconnections at all. If deleting the files hadn’t been such a difficult task due to the bad tool, I might have been in a better position.

Don’t put all your eggs in one basket

I have one site account with my host in which I put 3 websites; the resources these websites need are really minimal, so I just created subfolders and created a web app in each folder: one for my personal blog, one for my DotNetArabi podcast, and one for the blog of the same podcast. This was made possible by some URL rewriting tricks.

The plague didn’t hit all of them, only the blog of the podcast; but when the host decided to take the website down, it took them all down, simply because to my host it is a single website.

Regardless of my host’s decision to take the website down, there are so many things that can go wrong with a website that might affect all the subsites. Separation is good in this case.

Manage your backups

Like I said, I had 3 websites in 3 folders, and so I didn’t manage the backups for the website in its entirety; instead, I managed the backups separately. Makes sense? Well, I also had a web.config in the root in which I laid out the URL rewriting rules, without which the internal links to my blog posts would be broken (shout out to Maher for his help and notifications). And you guessed right, my dear reader: I didn’t back this one up. In fact, I did back it up, but by mere coincidence! *slaps own hand* So make sure you back up your website in its entirety.

Also, I thought I knew where my backups were. I was wrong! I was disappointed that I had to look for them: were they on the external drive? On my personal computer? In my personal VM on my work computer?

Your host’s influence

This is very important; let’s see:

  • Communication: It was good of my host to notify me of the hack, but they didn’t give me a clear message about what specifically I should do, and what the potential outcomes would be if I didn’t. Instead of sending me sample files, they could have sent me a list of all the malicious files, saving me (and them) the time and effort of looking them up. I can hear you say that this is not their problem, but considering the effort and time they wasted in back-and-forth communication, and the spam afflicting their servers, I reckon it would have been better if they had just sent me the full list of files.
    Also, they didn’t make it clear that they would shut me down if I didn’t delete these files in a timely manner; if they had, I would have been more active and keen to delete them. My impression was that the effect of these files was minimal.
  • Response Time: my host does not provide chat support, only email; this meant long latency before we could cooperate and solve the problem, especially with the notification that my website had already been down for 3 days.
  • To their credit, in their last email after the problem was solved, they suggested a couple of points on how to secure a WordPress blog; nothing fancy or detailed, but it was good of them, I guess.

Use scan service?

I deliberately put a question mark at the end of this title; I am not sure how good such services are. My host advised me to use SiteLock, but don’t consider this a recommendation as I haven’t tried it yet; I just think it’s worth mentioning here.

Securing WordPress

There is plenty of content on the web about securing a WordPress blog; here is one example. But without being too sophisticated, these are the most important things to do:

  • Make sure that the engine is up to date
  • Make sure the plugins are up to date
  • Make sure you use a strong password
  • FTP access: to be able to upload media content to your blog you might need to provide FTP access (if the installation didn’t set it up). If you are hosting your WordPress on Linux, DO NOT GIVE 777 permissions!


It all came down to me belittling the possibility of being hacked! So let me ask this again: do you have a website? You are already a target; don’t be lazy, go secure it NOW!

“Cloud-Ready Web Apps With ASP.NET 5” – Ignite Australia

It was a wonderful week last week, spent in the beautiful Gold Coast at a very interesting Microsoft Ignite conference. I got the opportunity to present on how ASP.NET 5 is designed to be suitable for hosting on the cloud; the following is the recording of my session:

If you missed the event, you can catch up with the recordings of the sessions on Channel 9; videos are still being uploaded.

Dependency Injection In ASP.NET 5 – One Step Deeper

Dependency Injection has always been an integral part of all the web frameworks under the ASP.NET umbrella: Web API, SignalR, and MVC. But historically these frameworks evolved separately from each other, hence each of them had its own way of supporting Dependency Injection; even with Katana’s attempt to bring these frameworks together through OWIN, you still needed to do some hackery to have a unified container that supported them all at once. Well, things have changed!

In this post I will dive a little bit deeper than this MSDN post; here we will examine the main interfaces involved, take a small peek at how things run under the hood, and explain what it really means to switch to your IoC container of choice.


The decision the ASP.NET team made was to provide the dependency injection functionality by abstracting the most common features of the most popular IoC containers out there, and then allowing the different Middlewares to interact with these abstractions to achieve dependency injection.
ASP.NET 5 supplies a basic IoC container that implements these interfaces, but it also allows developers to swap this default implementation with their own, through which they can use the IoC container of their choice. Usually this is not something that will be implemented by the application developer himself; rather, it is something to be implemented by the IoC container maintainers: the people behind Autofac, Ninject, etc.
Having said that, the ASP.NET team has provided basic implementations for the most common IoC containers, but these implementations are most likely to be revised by the IoC maintainers themselves.

Let’s examine the interfaces, shall we?


IServiceProvider

This is the main interface, through which the developer will be able to retrieve the implementation of a service he/she previously registered with the container (we will come to registration later). This interface has only one method: GetService(Type); think of container.Resolve<Service>() in Autofac, or kernel.Get<Service>() in Ninject.

All Middlewares will have access to two IServiceProvider instances:

  • Application-level: made available to the Middleware through HttpContext.ApplicationServices property
  • Request-level: made available to the Middleware through the HttpContext.RequestServices property. This scoped ServiceProvider is created for each request at the very beginning of the request pipeline by an implicit Middleware, and it will be disposed by the same Middleware at the end of the request, just before the response is sent back.

Note: I agree, the naming of the ApplicationServices and RequestServices properties might be a little bit confusing, but just take it as is for now; both are IServiceProvider instances.

All the Middlewares will use these properties (hopefully RequestServices only!) to resolve their services; e.g. the ASP.NET MVC Middleware will create the controllers and their dependencies through RequestServices (if you don’t believe me, check the code; it’s open source ;)), and the same goes for creating controllers in Web API, etc.
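As a hedged sketch of that resolution step (GreetingMiddleware and ISchoolTeacher are made-up illustration types, and the middleware shape follows the beta-era conventions, which may differ between betas):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Http;

// Hypothetical middleware that resolves a service per request.
public class GreetingMiddleware
{
    private readonly RequestDelegate _next;

    public GreetingMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // RequestServices is the request-level IServiceProvider described
        // above; it is disposed when the request ends.
        var teacher = (ISchoolTeacher)context.RequestServices
            .GetService(typeof(ISchoolTeacher));

        // ...use the service...

        await _next(context);
    }
}
```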

IServiceScope

Alright, so we said that the RequestServices Service Provider is a scoped container that will be disposed of by the end of the request, but how is this managed? You guessed right: by an IServiceScope.

This interface should be a wrapper around a scoped container, whose role is to dispose the container at the end of the request. So naturally it has:

  • IServiceProvider property: the scoped container
  • Dispose() method: by inheriting the IDisposable interface

The question is, who creates the IServiceScope? This brings us to the 3rd interface.

IServiceScopeFactory

A very simple interface as well; it has one method, CreateServiceScope(), which of course returns an IServiceScope.
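Putting the two interfaces together, a hedged sketch of the usage pattern would be the following (assuming the IServiceScope property is named ServiceProvider and that scopeFactory is an IServiceScopeFactory resolved elsewhere; ISchoolTeacher is the example service from later in this post):

```csharp
// Illustrative only: create a scope from the factory, resolve from the
// scoped provider, and let Dispose() release the scoped container.
using (var scope = scopeFactory.CreateServiceScope())
{
    var teacher = (ISchoolTeacher)scope.ServiceProvider.GetService(typeof(ISchoolTeacher));
    // ... use the scoped service ...
} // the scoped container (and its services) is disposed here
```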

So if you maintain an IoC container and you want to use it in place of the default one, you have to implement the above-mentioned interfaces.
“But Emad, you didn’t talk about registration of services with the container! And how does it all fit together?!”. Patience my friend, let me just finish this section with the last two classes and then we will jump to the registration.

ServiceLifetime

An enum with three values that define the lifetime of services (objects, really):

  • Singleton: single instance throughout the whole application
  • Scoped: single instance within the scoped container
  • Transient: a new instance every time the service is requested

ServiceDescriptor

Finally, the last class! This class is the construct that holds all the information the container will use in order to register a service correctly; imagine it saying: “hey you, whichever container you are, when you want to register this service make sure it’s a singleton, and take the implementation from this type”. Fancy? Let’s check the members of interest:

  • ServiceType: a property of type Type; this is the interface you want to substitute with a concrete implementation, e.g. ISchoolTeacher
  • ImplementationType: a property of type Type; this is the implementation type of the ServiceType above, e.g. SchoolTeacher
  • Lifetime: the lifetime desired for this service: Singleton, Scoped, or Transient.
  • ImplementationFactory: a Func<IServiceProvider, Object>. In some scenarios the app developer wishes to provide a factory method to instantiate the concrete implementation of the service; maybe there are factors outside of the service’s control that mandate how the service should be created, and this property will hold that factory method. And yes, it’s mutually exclusive; if you provide an ImplementationType you don’t provide an ImplementationFactory, and vice versa.
  • ImplementationInstance: so you can provide a type as an implementation, and you can provide a factory method to create the object. You can also provide a specific instance; this property of type Object holds that instance. It should also be mutually exclusive with ImplementationType and ImplementationFactory.
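The three mutually exclusive ways above can be sketched as follows. Only the type-based constructor appears verbatim later in this post; the factory-based and instance-based overloads are assumed to exist from the properties described, so treat them as illustrative:

```csharp
// 1. Type-based: the container constructs SchoolTeacher itself.
var byType = new ServiceDescriptor(
    typeof(ISchoolTeacher), typeof(SchoolTeacher), ServiceLifetime.Transient);

// 2. Factory-based (hedged: assuming a matching constructor overload):
//    the container calls the delegate, handing it the IServiceProvider.
var byFactory = new ServiceDescriptor(
    typeof(ISchoolTeacher),
    provider => new SchoolTeacher(),
    ServiceLifetime.Scoped);

// 3. Instance-based (hedged as above): always hand back this exact object,
//    which effectively makes it a singleton.
var byInstance = new ServiceDescriptor(typeof(ISchoolTeacher), new SchoolTeacher());
```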

Great, now for your application to run as expected, you will have a list of these ServiceDescriptors that you will hand to your container, and tell it to register these services according to how they are described. So let’s look at how this runs together including the registration part.

Registering Services

Now to register your services, ASPNET5 expects that your Startup class has a method called ConfigureServices that takes a list of ServiceDescriptors, wrapped in an IServiceCollection, and returns nothing (there is another form of this method that we will discuss shortly). All you have to do is create ServiceDescriptors for the services you want to register and add them to the list. The web app will pick up this list later and register it with the container.

public void ConfigureServices(IServiceCollection services)
{
    var serviceDescriptor = new ServiceDescriptor(typeof(IBankManager), typeof(BankManager), ServiceLifetime.Transient);
    services.Add(serviceDescriptor);

    // Add MVC services to the services container.
    services.AddMvc();
}

Note: Creating ServiceDescriptors can be a little bit verbose, which is why you see Middleware using extension methods to create them, like “services.AddMvc()”
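To illustrate the note above, here is what such an extension method could look like. AddBanking(), IBankManager, and BankManager are borrowed from the snippet earlier purely as an example; this is a hypothetical method in the style of AddMvc(), not part of any framework:

```csharp
// Hypothetical extension method: it bundles the verbose ServiceDescriptor
// creation behind a single, readable call on the IServiceCollection.
public static class BankingServiceCollectionExtensions
{
    public static IServiceCollection AddBanking(this IServiceCollection services)
    {
        services.Add(new ServiceDescriptor(
            typeof(IBankManager), typeof(BankManager), ServiceLifetime.Transient));
        return services; // returning the collection allows chaining
    }
}
```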

So how will this be orchestrated with the application start?

The following pseudo statements explain the server startup and how the Service Provider is created, the corresponding code can be found in the HostingEngine.Start method:

Note: this post is based on the beta4 version; things have changed since then but the main behavior is the same, so adding the code here won’t add much value; pseudo code should be good enough.

  1. Hosting engine will create an IServiceCollection, which is a collection of ServiceDescriptors
  2. Hosting engine will add all the services it needs to the list
  3. Hosting engine will ensure that there is a Startup class in your assembly and that it has a method called ConfigureServices
  4. Hosting engine will load this method and call it passing the IServiceCollection
  5. ConfigureServices in the Startup class will add the apps services to the list
  6. Hosting engine will create a DefaultServiceProvider (the container) and use the information in IServiceCollection to register the services to the DefaultServiceProvider
  7. Hosting engine will create the Application Builder (IApplicationBuilder) and assign the new Service Provider to the property IApplicationBuilder.ApplicationServices so it can use it further down
  8. Hosting engine will add a Middleware before giving the chance for the Startup.Configure to run, placing it to be the first Middleware in the pipeline. The Middleware is RequestServicesContainerMiddleware, which will be discussed shortly.
  9. Hosting engine will call Configure method in Startup class passing the Application Builder to build the Middleware pipeline where the Service Provider can be used through the ApplicationServices property to build the Middleware if needed

Great, the server is configured, started, and ready to receive requests. What happens now during a request? How is dependency injection run?

Running a Request

When the request first comes in, an HttpContext will be created to be handed to the Invoke method of the first Middleware, and subsequently to all the Middlewares. But just before it’s handed to the first Middleware, the Application Builder’s Service Provider is assigned to the property HttpContext.ApplicationServices, making the application-level Service Provider available through the HttpContext for all the Middleware to use as they need. It should be kept in mind, though, that this is the application-level Service Provider, and depending on the IoC container of choice, your objects might stay alive through the whole lifetime of the application if you use it.

Note: in theory, as an application developer, you should not use the Service Provider directly; if you do then you are using the Service Locator pattern, which is advised against.

Ok then, that was an application-level Service Provider, isn’t there a Service Provider that is scoped for the lifetime of the request? Yes, there is.

In step 8 in the list above, we mentioned that the hosting engine adds the RequestServicesContainerMiddleware at the beginning of the pipeline, giving it the chance to run first.
The code hasn’t changed much for this Middleware for a long time, so I think it’s safe to put the code here :)

public async Task Invoke(HttpContext httpContext)
{
    using (var container = RequestServicesContainer.EnsureRequestServices(httpContext, _services))
    {
        await _next.Invoke(httpContext);
    }
}

Going back to the request execution, the server creates the HttpContext, assigns the application-level Service Provider to HttpContext.ApplicationServices, and then invokes the first Middleware, which is the RequestServicesContainerMiddleware. Can you see that using statement in the Invoke method? That’s where the magic lies; all it does is create a scoped Service Provider that will be disposed of at the end of the request. The pseudo code will be:

  1. Request is handed by RequestServicesContainerMiddleware
  2. Invoke will retrieve an IServiceScopeFactory from the application-level Service Provider via HttpContext.ApplicationServices.
  3. IServiceScopeFactory will create a scoped container (think of ILifetimeScope in Autofac)
  4. The scoped container will be assigned to the property HttpContext.RequestServices
  5. The Invoke method calls the subsequent Middlewares, allowing the request to go through
  6. When all the Middlewares are invoked and the call returns back to the RequestServicesContainerMiddleware, the scoped Service Provider will be disposed of by the “using” statement.

Note: RequestServicesContainerMiddleware uses a wrapper/helper class RequestServicesContainer to manage the creation and disposition of the scoped Service Provider, which is the object used in the “using” statement really

The HttpContext.RequestServices is the scoped container for the request lifetime, and all the subsequent Middleware will have access to it. For example, if you check the MvcRouteHandler.InvokeActionAsync you will see that it’s using it to create the controllers:

private async Task InvokeActionAsync(RouteContext context, ActionDescriptor actionDescriptor)
{
    var services = context.HttpContext.RequestServices;
    Debug.Assert(services != null);

    var actionContext = new ActionContext(context.HttpContext, context.RouteData, actionDescriptor);

    var optionsAccessor = services.GetRequiredService<IOptions<MvcOptions>>();
    actionContext.ModelState.MaxAllowedErrors = optionsAccessor.Options.MaxModelValidationErrors;

    var contextAccessor = services.GetRequiredService<IScopedInstance<ActionContext>>();
    contextAccessor.Value = actionContext;
    var invokerFactory = services.GetRequiredService<IActionInvokerFactory>();
    var invoker = invokerFactory.CreateInvoker(actionContext);
    if (invoker == null)
    {
        LogActionSelection(actionSelected: true, actionInvoked: false, handled: context.IsHandled);

        throw new InvalidOperationException(/* message elided */);
    }

    await invoker.InvokeAsync();
}

Note: a reminder, again, you shouldn’t need to use the Service Provider directly; try to manifest your dependencies through constructors, avoid the Service Locator pattern.

Awesome, now what if you want to substitute the default container with something like Autofac? Glad you asked, let’s see how.

Bring Your Own IoC Container

Before we start, this is a reminder that this is something to be implemented by the IoC container maintainers, not by the application developer.

To use your own container you have to implement the interfaces IServiceProvider, IServiceScope, and IServiceScopeFactory. Implementing the interfaces should be straightforward because the interface itself mandates what you need to do; the Autofac implementation can be used as an example.

But the subtle thing that needs to be explained is that the ConfigureServices method in the Startup class has another form that the hosting engine expects; this form is expected in case the developer wants to use his own IoC container. In this form the method should return an IServiceProvider; once all the desired ServiceDescriptors are added to the IServiceCollection, the developer should create his container, register the services the way the container expects, and then return the container’s implementation of the IServiceProvider. The following is the code to use Autofac:

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    // Add MVC services to the services container.
    services.AddMvc();

    var builder = new ContainerBuilder();

    // Create the container and use the default application services as a fallback
    AutofacRegistration.Populate(builder, services);

    var container = builder.Build();

    return container.Resolve<IServiceProvider>();
}

The AutofacRegistration.Populate registers the services the way Autofac likes, and registers the IServiceScope and IServiceScopeFactory implementations (this is only a part, check the complete code on the link):

private static void Register(
    ContainerBuilder builder,
    IEnumerable<ServiceDescriptor> descriptors)
{
    foreach (var descriptor in descriptors)
    {
        if (descriptor.ImplementationType != null)
        {
            // Test if an open generic type is being registered
            var serviceTypeInfo = descriptor.ServiceType.GetTypeInfo();
            if (serviceTypeInfo.IsGenericTypeDefinition)
            {
                // ...
            }
            // ...
        }
        else if (descriptor.ImplementationFactory != null)
        {
            var registration = RegistrationBuilder.ForDelegate(descriptor.ServiceType, (context, parameters) =>
            {
                var serviceProvider = context.Resolve<IServiceProvider>();
                return descriptor.ImplementationFactory(serviceProvider);
            });
            // ...
        }
        // ...
    }
}


But then how will this fit with the 9 steps above in Registering Services? Well, it changes a little bit to become like this (steps 3 and 6–10 are where it differs):

  1. Hosting engine will create an IServiceCollection, which is a collection of ServiceDescriptors
  2. Hosting engine will add all the services it needs to the list
  3. Hosting engine will ensure that there is a Startup class in your assembly and that it has a method called ConfigureServices. First it will look for the form that returns an IServiceProvider, if not found then it uses the one that returns nothing
  4. Hosting engine will load this method and call it passing the IServiceCollection
  5. ConfigureServices in the Startup class will add the apps services to the list
  6. ConfigureServices will create the IoC container of choice
  7. ConfigureServices will register all the services in the IServiceCollection to the new container
  8. ConfigureServices will make sure to register the IServiceScope and IServiceScopeFactory with the services (remember step 2 in Running a Request above?)
  9. ConfigureServices will create an instance of the container’s implementation of the IServiceProvider and return it
  10. Hosting engine will retrieve the IServiceProvider supplied by ConfigureServices (instead of creating a DefaultServiceProvider and registering the services from the IServiceCollection to it)
  11. Hosting engine will create the Application Builder (IApplicationBuilder) and assign the new ServiceProvider to the property IApplicationBuilder.ApplicationServices so it can use it further down
  12. Hosting engine will add a Middleware before giving the chance for the Startup.Configure to run, placing it to be the first Middleware in the pipeline. The Middleware is the RequestServicesContainerMiddleware, discussed earlier.
  13. Hosting engine will call Configure method in Startup class passing the Application Builder to build the Middleware pipeline where the Service Provider can be used through the ApplicationServices property to build the Middleware if needed

Voila! All is ready.


I hope by now there is no magic in how dependency injection really works in ASP.NET 5; if you have questions or comments feel free to leave them in the comments section.

Tips’n Tricks

  • In order for you to debug this whole process and step into the code you need to do the following:
    • Get the code by checking out the repositories from GitHub and make sure you are on one release tag (like beta4)
    • Create a web app in Visual Studio
    • Alter the “global.json” file so you add the paths to the repositories source to the “projects” key like this *
    • Now you have the code in your hands and can step through it
  • Code of interest:

ANZCoders Wrapup

Over the last week the first ANZCoders virtual conference took place, the conference that you can attend in your pyjamas! Fifteen sessions over five days by twelve speakers, all voted upon by the audience themselves.

The conference was live, but it was also recorded on YouTube; every session has its own YouTube video available for watching any time. So I hear you say “Why attend live if the video is going to be available later?!”…here is why:

  1. The live Q & A: after the session the audience was given the chance to ask the speaker questions, just like any real conference; which is something that is not available for people watching the video later.
  2. The live discussion: as the speaker was running through the session, the chat channel was humming with all sorts of different opinions, supporting stories, links to resources, and there were lots of laughs that made the conference even more fun! Although this might sound a little bit distracting for both the audience and the speaker, IMHO the benefits outweighed the drawbacks.
  3. The people: connecting with such intelligent and passionate people was invaluable! Enough said.

The only drawback I guess was the reliability of the speaker’s internet connection; I for example lost at least 6 valuable minutes of my “IoC in ASPNET5” talk, even with my best arrangements to have proper connectivity! (Yes, you need to skip past the minutes from 2:30 to 8:30). But hey, a speaker can have a flu in face-to-face conferences as well ;).

Will I participate in a live virtual conference again? Absolutely!

So big thanks to Richard Banks for organizing the conference, and big thanks to the sponsors, the speakers, and the lovely audience who made this event a success!


Consultant Skills: Self-Confidence


I have blogged before about some of the skills that a consultant should be acquainted with, like Story Telling, Knowledge Depth & Breadth, and Having an Opinion, all of which I consider very important. But in this post I would not hesitate to say that self-confidence is the single most important amongst them all!

Before we continue, what does “self-confidence” mean? My words of choice would be: “it is the belief someone has about his/her capability of accomplishing something that he/she hasn’t tried before“. Notice here that it’s a “belief”.

Lacking Self-Confidence Is Bad

“How come you think it is the most important?” I hear you ask. Well, if you have been following my posts you will clearly see that I love bullet points, so let me list how lack of self-confidence is bad:

  • Lack of self-confidence is the shackles and chains that the individual willingly puts around his own neck, preventing himself from achieving even the simplest of goals, even if he has the potential and capabilities; he might be smart, thoughtful, knowledgeable, and resourceful, but he will not utilize any of these traits because he thinks he doesn’t have them, nil, zero!
  • The main job of a consultant is to solve the problems of his clients; the client is clueless, confused, lost, in doubt, and he needs help, he needs someone to rescue him from the trouble he is in. Imagine yourself to be such a client: would you accept a consultant’s help if he wasn’t in a better state than you are? If the consultant himself is not sure of his capabilities and doubts his skills, would you still hand him your problem to solve? For employees it might be different; the employee might be considered an investment the company or management invests in, so he receives the encouragement and support to get him going and increase his self-confidence if he lacks it. Consultants, on the other hand, don’t have this luxury.
  • Even if you are not taking clients, your image as seen by people and peers will be affected. People will see you as you see yourself; if you see yourself as someone who can provide solutions and solve problems, you will be looked at as such a person; if you see yourself as weak, stupid, or a failure then, no surprise, you will be looked at as such a person.
  • Self-doubt brings depression, at various levels, if not controlled.
  • If you find these points aren’t bad enough, then re-read the first point above

OK it’s obvious how tremendously dangerous this is, but how to solve this?

How to Increase Self-Confidence

Shall we list:

  • Acknowledge the problem; this is going to be the driving force of change: realization. You have to realise how big of a problem this is, and realise the grave effect it has on you. This is a state that we should have absolutely zero tolerance for.
  • Remember the “belief” part of the definition? You have to find a proof that supports your belief: why do you think you can’t do it? Can you prove that you can’t? I would say help yourself out, and whenever you are in doubt try to remember all the success stories and accomplishments you have achieved in your life that can be compared to the situation you are in, and use them as proof that you can.
    What if you can’t find a proof of success? Well, at least you don’t have a proof of failure! So you can’t be doubting yourself!
    What if you DO have a proof of failure? Then you should ask yourself: “was it the same circumstances? Have I changed since then? Is this situation exactly like the one I failed in?” If your answer is no, and most likely it is, then this can’t be used as a proof, and we are back to the fact that you don’t really know that you can’t accomplish it! If your answer was yes though, then let’s examine the next point.
  • Ask yourself “what can I do to do things differently this time? How can I change the reason that caused my previous failure, so it doesn’t apply any more?” This is a competitive advantage over confident people: you are URGED to think more, try harder, prepare better, and find a better way of doing things. Remember the tortoise and hare race story?
  • Self-doubt and the fear of failure go hand in hand; we think we can’t accomplish, consequently we think that if we try we will fail, and failing is a big problem, right? Well, no! Whenever I am anxious about something, my wise wife asks me “what is the worst thing that can happen?”; we tend to build feelings of fear, based on our implicit imagination, that are much greater than what they would really be if the failure actually happened. So thinking about the worst case scenario invalidates these fears and gives them their proper size.
  • In most cases, the opportunity cost is much greater than the failure cost. If self-doubt prevented you from trying to take a leading position, then the loss of missing that position is greater than the loss you would have from trying and failing (in that attempt); at least you will learn how to do it better next attempt. It might be a simple sentence, but think about it for a minute, absorb it…what do you think?

Finally, know that this is a continuous struggle with yourself; it will never end! If you grow you will be exposed to new things, and if you are exposed to new things you will doubt yourself, period! Hopefully, though, with these tips and tricks you will be in control; you will use your self-doubt to your advantage and move forward successfully.

Now go and nail it!

[Image Credit: Kevin Cawley]

Consultant Skills: Having an Opinion

This is the third of three posts I’ve written about consultant’s skills, check the previous posts if you like:


We work in an industry where one general problem can be solved in many ways, each emerging from a different mindset and different circumstances. But in this industry there is also community pressure; there are “cool geeks” and “cool solutions”, there are “best practices”, and there are hypes and fads.

And you guessed right, my dear reader: many of us lease our brains to these influences; if it is individuals, we wait for their blog posts or tweets; if it is an organization, we wait for their radar; or if it is a community, we count how many use it…etc.
And here I would bring up two questions:

  • Why do we do that?
  • And what is the impact of this behavior on us as individual experts, and the industry?

Answering these two questions should positively change the way we think of technology, and how to interact with it. So let’s attempt answering them:

Why do we do that?


  • Thinking is heavy, and we are lazy! Having an opinion about a certain solution, a database engine, an architectural design, or an open source project requires knowledge that needs to be acquired, and requires time to sit and think of all the scenarios in which this solution can be suitable, or not. All of this is heavy and takes a lot of effort, so instead of taking on that burden we seek ready answers.
  • We are afraid to be judged. I’d want to give an opinion about this framework but I am afraid to criticize it; I want to say that I don’t like it for that reason, but I am afraid that my opinion would turn out wrong, or “stupid” in the eyes of others. So I’d rather keep quiet, right?
  • Lack of self-confidence. And this is different from the previous point; here I don’t care what others think of me, but I am not sure of my brain’s capabilities: am I really intelligent enough to judge this framework? We put ourselves down to the extent that we don’t even ask ourselves if we are smart enough or not! We just believe we aren’t, and consequently never think of articulating an opinion at all.

What is the impact of not having an opinion?

  • We become mentally disabled! We become too dependent on others to the extent that we can’t intellectually live on our own. What if circumstances push us to situations where the capability of getting external help is narrow?
  • Opinions supported by proofs drive solutions. If we don’t have opinions we transform from developers to coders; we lose our value as solution providers, which is bad for us economically as the demand for us in the market diminishes, and more importantly bad for our self-respect; what are we other than the value we bring to the world? You must have heard of “The surprising truth about what motivates us”.
  • Others give us THEIR solutions, that solved THEIR problems, which will not necessarily solve ours.
  • We damage the industry as we shut down intellectual power that would’ve enriched the industry, no matter how small that would be.

So, if you agree with me on the points above, then let’s check some of the suggestions that would enable me and you to form opinions, and hopefully good ones.

How to form an opinion?

  • Don’t underestimate your mental capabilities. This is the most important point! At the end of the day it’s mere logic, and we all have logic; in general fast is better than slow, simple is better than complex, cheap is better than expensive…etc.
  • Try answering your own questions. For example, you see more ORMs coming into existence and more people using them; you used them yourself, but you never had an opinion about them. One time you wonder if they really provide value, or if they are just a waste, so you instantly think of your really smart colleague: he must have an answer. My suggestion is, just before you ask him, try to answer the question yourself; doing so explicitly will force you to think, and you might surprise yourself!
  • Learn from others, observe their opinions. And no, I am not saying to adopt their opinions; what I am saying is that forming an opinion is a skill that can be learned. Check how they approach the product, what they see as weaknesses and why, and what they see as strengths and why. The better you observe smart people’s opinions, the better you can form one.
  • Acquire and then utilize knowledge. In order to form an opinion about something you have to know something about it! The more you know, the closer to correct your opinion is, so you’d better do your homework and acquire as much knowledge as you can. Of course, at some point you will not have the time, capacity, or resources to know more; in that case you build your opinion according to the knowledge you have accumulated. The catch here, though, is to declare that amount of knowledge when you give your opinion.


  • We still need to ask experts. In fact, I even encourage you to do so, but the only thing I am asking is not to follow them blindly! The idea here is to be able to judge for ourselves which solution is more solid than another using our own logic, even if it means we have to weigh between experts’ opinions.
  • You don’t have to have an opinion about everything, but at least on things that affect your technical life.


You want to be more valuable? You want to grow your career? You want to be independent? Then have an opinion.

9 Things I Learned From Skiing

I had the opportunity to visit Mount Buller to ski for the first time in my life (thanks to Joshua McKinny), and the experience was UNBELIEVABLE! In addition to the loads of fun I had, I learned some life lessons that can be applied in any field, even software, and I’d like to share them with you.


  1. When in doubt, get rid of doubt.
    A night before the trip, I was at the shops. I saw this winter warm hat that I thought I should buy, then I remembered I had one at home but I wasn’t sure if I still had it; I remembered one time when I needed it and I couldn’t find it, so I had doubts!
    I had to take a decision, something like the following diagram:

    Apparently possibility D has a much higher cost compared to A, B, and C, and the only way to avoid it is to take decision X rather than Y… I chose Y, and ended up with D!
    At the end I managed, but let’s just say I was left in an embarrassing situation.
  2. Everything has pros and cons
    Between choosing Skiing and Snowboarding, I chose Skiing; the majority opinion was that Skiing is easier and you have more control, which is better for beginners. But this came with a cost; the boots were horrific to walk in, and I had to carry two sliders and two poles all around! I don’t regret my choice, but let’s say I am a lot more aware that there are cons with every choice we make; the question is whether we are willing to make the most of it or not.
  3. Listen to the experts
    Josh gave us a long list of advice beforehand: what stuff to bring along, where to get the gear from, what we should expect…, all of which made a huge difference. And whenever you fail to listen you are beaten; I will never forget a scarf when I am at the top of the mountain!
  4. It’s more difficult than I thought
    I judged Skiing too early; from the videos and photos I had seen in my life, it seemed too easy! Just go in angles and change direction once you reach the edge of the slope, and repeat until you reach the end. Guess what? Easier said than done!
    Skiing literally is sliding on a slippery surface while you try your best to control the sliding; the sliders are long, heavy, and go in all directions, the amount of effort you have to give to control the sliders to go in a certain direction is big, and they don’t just listen! The angles in which you have to position them to accomplish that control are tricky, the pressure on your knees is enormous, your body’s position makes a big difference, and at the slightest loss of control of the sliders your body will start wobbling, and you don’t just cross your legs to fix that! Oh, and did I mention that there are types of snow, some of which make things even harder?…EXHALES!
    The idea of this lesson: don’t underestimate or judge anything too quickly, until you try it out first.
  5. You don’t know what you are missing, until you try it. It’s loads of fun!
    Sometimes we are just too lazy, and due to our laziness (or let’s say “comfort zone”) we miss out on too many opportunities. I knew that it was going to take me a day, that it was going to be cold, that I had to learn skiing, and I was afraid that I wouldn’t enjoy it…but…I pushed myself; I also knew that this was not going to happen again anytime soon and I should tick it off my bucket list. Let me tell you this: IT WAS AWESOME! Doing it myself revealed many aspects I’d never get from watching a video; the mere speed a human can reach on these sliders is of utmost thrill, let alone the joy when you really start controlling it.
    It really made me think of all the things I might be missing due to the same reason, whether leisure or career opportunities.
  6. You are going to fall, and it’s going to hurt
    There is absolutely no escape from falling, unless you are an expert, and then you have already fallen plenty of times, and surprise, it hurts! I fell so many times: one time on my arm, which ended up swollen, one time I twisted my leg, and another time I was displaced a couple of meters away from my slider after it flew off.
    These falls were necessary; I knew exactly what to do, and what NOT to do, because I didn’t only “hear” about the consequences, I lived them, and they hurt! Because of these falls I had to learn; because of these falls I became a better skier.
  7. In fact, pain is part of the fun
    The falls mentioned in point 6 were painful indeed, but they were also fun; they break the routine of the body, the monotonous experience we go through in our lives. Being thrown and twisted in the air, and feeling your body go through a different experience, all of this had its own flavor. It might sound funny, but it really did (just don’t break something while you’re at it).
    But more important than that, these falls also gave success a better meaning; when I slid for longer periods without falling, the feeling of success was deep and meaningful. Had it been too easy, that success would have tasted like…meh.
  8. Following instructions is important, but so is following instinct
    I had a lesson with an instructor, for I was an absolute newbie; the instructor taught us how to stop and maneuver, along with some other instructions, and then released us to the wild. I tried to follow all his instructions perfectly (usually I am a good student), but I still kept falling!
    Then, at one of the slopes, I felt like I should be leaning my body at a certain angle and pressing down with my toes. It was an absolutely instinctive feeling, not a trial-and-error thing, and guess what…it worked! The instructor hadn’t mentioned this; maybe because he never really gave it a deep thought, maybe because he has been skiing all his life. Regardless of the reason, the instructions he gave weren’t enough; I had to use my instinct, which proved highly valuable in addition to the external knowledge.
  9. Most importantly, company is everything
    This, my dear reader, was of the utmost importance; Josh and Neil were extremely good company: very understanding, patient with my primitive skiing skills, easy-going with suggestions, generous, and full of knowledge that filled the trip with beneficial discussions. All of which allowed me to enjoy things enough to come up with the previous 8 lessons!

Did I learn more lessons? Indeed, but 9 is a nice number 😉