Author Archives: Emad Alashi

RBAC in Azure Kubernetes Service AKS on Twitch!

tl;dr: I will be streaming on Twitch next Monday (25th of March) at 8:30 Melbourne time (GMT+11), configuring Azure Kubernetes Service (AKS) to use RBAC.


For a long while, I’ve been thinking about streaming live development to Twitch or YouTube. Having spent some time behind the microphone making the DotNetArabi podcast, I can say there is a satisfying feeling in producing content in a media format through which you can connect with the audience.

Why not just offline video?

I could just record an offline video and host it on YouTube, and that’s definitely a valuable medium. The problem with educational videos specifically is that they are a one-way communication channel, and without the entertainment factor of, say, movies, they can be daunting, imprisoning, and hard to follow.

The magic of live streaming

But with live streaming magic happens; it adds additional dimensions that make it more appealing:

  1. It’s LIVE! It’s happening NOW, and this means a couple of things: it implicitly has the anticipation factor; things are still happening and they might take interesting turns, just like live sports. In addition to that, by sharing the time span during which the event is happening, the audience gets the feeling of involvement and “I was there when it happened”, even if the audience didn’t directly interact with the broadcaster.
  2. It’s real and revealing: When I was doing my homework preparing for this, I talked to my colleague Thomas Koster, and when I asked him about what could interest him in live streaming, his answer was:
    …it’s probably more the real time nature of it that appeals – to see somebody’s thought processes in action, as long as the broadcaster doesn’t waste too much time going around in circles.
    For example, watching somebody figure out a puzzle solution in the game The Witness in real time is much more interesting and valuable than watching a rehearsed, prepared performance of only the final solution.

    This is the ultimate stage for a developer broadcaster; it requires a lot of bravery and experience. I’d love to be able to do this soon, but it’s really the 3rd reason below that drew me to streaming.

  3. It’s two-way communication: the interactive communication between the broadcaster and the audience brings the video to life. It provides a timely opportunity to get the best out of this communication, whether it is the audience correcting the broadcaster or the broadcaster being available for immediate inquiries.

Specifically for this last reason, I became interested in live streaming; I want this relationship with my audience: a collaborative experience where value comes from everyone and flows in all directions.

So, I am doing my first stream!

I have been following Jeff Fritz @csharpfritz and Suz Hinton @noopkat and have been greatly inspired by their amazing work! Also, @geoffreyhuntley has started his journey and gave me the last nudge to jump into this space. I’ve learned a lot from Suz’s post “Lessons from my first year of live coding on Twitch“, and recently Jeff’s “Live Streaming Setup – 2019 Edition” (don’t let it scare you, you don’t have to do it all!).

My next stream will be about Role-Based Access Control (RBAC) in Azure Kubernetes Service (AKS). I will walk you through RBAC, the OAuth2 Device Flow, and how this works within Azure AKS, with hands-on live deployments and configuration.

What is my goal, and what is not?

What I am trying to achieve here is two-way communication with my audience through the session; that’s it.

Am I going to do this constantly now?

Actually, I don’t know! To me this is an experiment; I might keep doing it, or this might be my first AND LAST stream, let’s see what the future brings. :)

Fix “Mixed Content” When Using Cloudflare SSL And IIS Rewrites

In this post, I explain how I fixed the “mixed content” security issue when using Cloudflare Flexible SSL and IIS rewrites.

I Run Two Websites Under One Account Using IIS Rewrites

I have two websites that are hosted under one account with my hosting provider (I know!): https://emadashi.com and https://dotnetarabi.com. The way I do it is by using IIS Rewrite rules in my web.config: for any request targeting one of these domains, I “rewrite” the URL so it points to the sub-directory that serves the request. This changes where the file is served from, but does not change the request URL the user sees.

However, if a request reaches the server targeting the sub-directory itself, that page will still be served as is, which is not desirable: I don’t want to expose the internals of my websites; it’s ugly and bad for my websites’ URL discoverability. In this case, I first want to “redirect” the user to the domain without the sub-directory, and then run the rewrite rule mentioned above, which is what I did.

In pseudo terms, when a request comes in, the execution of the rules looks like this (a hedged web.config sketch follows the list):

  1. Rule1: Does the URL include a sub-directory? If so then Redirect to the same URL without the sub-directory.
  2. Rule2: The URL does not include the sub-directory, so Rewrite (not Redirect) to the sub-directory.
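
A hedged sketch of what these two rules can look like in web.config (the rule names and the sub-directory name "dotnetarabi" here are illustrative placeholders, not my exact configuration):

<rewrite>
  <rules>
    <!-- Rule 1: a request that exposes the sub-directory is redirected to the clean URL -->
    <rule name="RedirectAwayFromSubdirectory" stopProcessing="true">
      <match url="^dotnetarabi/(.*)$" />
      <action type="Redirect" url="http://dotnetarabi.com/{R:1}" appendQueryString="true" />
    </rule>
    <!-- Rule 2: a clean URL is rewritten (not redirected) into the sub-directory that actually hosts the site -->
    <rule name="RewriteIntoSubdirectory" stopProcessing="true">
      <match url="^(.*)$" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^(www\.)?dotnetarabi\.com$" />
      </conditions>
      <action type="Rewrite" url="dotnetarabi/{R:1}" />
    </rule>
  </rules>
</rewrite>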

I Want to Serve My Websites Over HTTPS, But…

Now, when I wanted to secure my websites and start serving requests over HTTPS, thanks to Troy Hunt’s continuous nagging :P, I couldn’t just use normal certs with my hosting due to the way I am running it. So, again thanks to Troy Hunt’s awareness efforts, I used Cloudflare’s free Flexible SSL service.

This went fine until I discovered that the engine of dotnetarabi generated the guest images’ URLs including the sub-directory. When I open dotnetarabi over HTTPS, the first request to these URLs is HTTPS, but of course contains the sub-directory; the second request though (the redirect to the URL without the sub-directory) always comes back as HTTP! This caused the well-known “insecure mixed content” problem.

Simply put, the reason is that:

  1. With Flexible SSL, Cloudflare ALWAYS communicates with your server via HTTP; you don’t have certs on your server, which is why you need Flexible SSL in the first place!
  2. Cloudflare Flexible SSL doesn’t force HTTPS unless you explicitly ask it to (via the Always Use HTTPS option). So if the request came in via HTTP, it will be passed through as HTTP.

So in the case of my redirects above, what happens is the following:

  1. The request comes to Cloudflare via HTTPS, and the URL includes the sub-directory.
  2. The request is forwarded to my server via HTTP (NOT HTTPS!), targeting the sub-directory.
  3. My server innocently redirects the request to the URL without the sub-directory, but using the same protocol as the current request, which is HTTP, because it always will be!
  4. The user receives the redirect to the new URL, but with the HTTP protocol this time, and Cloudflare just passes it through because it does not force HTTPS.

The solution

The trick is that while Cloudflare does not use HTTPS when it forwards the request to your server, it does add the header X-FORWARDED-PROTO=https to the forwarded request if the original request was made over HTTPS.

So, all I needed to do was check this header in my redirects: if it says https then redirect to HTTPS, otherwise redirect to HTTP:

The Action part of my rule:

<action type="Redirect" url="{MapSSL:{HTTP_X_FORWARDED_PROTO}}dotnetarabi.com/{C:1}" appendQueryString="true" logRewrittenUrl="false" />
<rewriteMaps>
  <rewriteMap name="MapSSL" defaultValue="https://">
    <add key="https" value="https://" />
    <add key="http" value="http://" />
  </rewriteMap>
</rewriteMaps>
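
For context, the {C:1} back-reference in that action comes from a condition capture; a hedged sketch of how the action sits inside a complete rule (with an illustrative match pattern) is:

<rule name="RedirectDotNetArabiSubdirectory" stopProcessing="true">
  <!-- match requests that expose the sub-directory -->
  <match url="^dotnetarabi/.*" />
  <conditions>
    <!-- capture everything after the sub-directory so the action can reuse it as {C:1} -->
    <add input="{REQUEST_URI}" pattern="^/dotnetarabi/(.*)$" />
  </conditions>
  <!-- choose http:// or https:// based on X-FORWARDED-PROTO via the MapSSL rewrite map -->
  <action type="Redirect" url="{MapSSL:{HTTP_X_FORWARDED_PROTO}}dotnetarabi.com/{C:1}" appendQueryString="true" logRewrittenUrl="false" />
</rule>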

 

HTTP Binding in PowerShell Azure Functions

In a small project, I was trying to utilize an existing PowerShell script I had and host it in Azure Functions; I needed to understand how the HTTP binding works with PowerShell Azure Functions, as I didn’t want to rewrite my script in C# just because PowerShell Azure Functions had “(Preview)” appended to the name.

I wanted the Function to return a plain text response to an HTTP trigger based on a query parameter (this is how Dropbox verifies Webhook URLs). So, naively, I followed the basic template as an example:

Write-Output "PowerShell HTTP function invoked"

if ($req_query_name) 
{
	$message = "$req_query_name"
}
else
{
	$message = "wrong!"
}

[io.file]::WriteAllText($res, $message)

The first question I had was “how is the querystring parsed?” I assumed I should replace “req_query_name” with the querystring key from the request, but should I replace the whole thing to become $myQueryParam? This is when I decided to look at the source code rather than the documentation.

Note: I try to link back to the source code wherever I can; the problem is that the links do not include the commit ID, so next to each link I put the commit ID at which the file was in the state described.

HTTP Binding

There are different phases that take place during a Function execution; in this post I will skip the details of how the binding is loaded and concentrate only on how the HTTP binding operates within a PowerShell Function.

Input

When the Azure Functions runtime receives an HTTP message for a PowerShell script that has an HTTP binding, it parses the message as follows:

  • The body of the HTTP request will be saved to a temp file, and the path of the temp file will be assigned to an environment variable that matches the “name” property of the input binding configuration. If we take the following JSON as an example of our “function.json” configuration, then the name of the variable will be “req“:
    {
      "bindings": [
        {
          "name": "req",
          "type": "httpTrigger",
          "direction": "in",
          "authLevel": "function"
        },
        {
          "name": "res",
          "type": "http",
          "direction": "out"
        }
      ],
      "disabled": false
    }


    (This happens here at dcc9e1d)

  • The original URL will be saved in the environment variable “REQ_ORIGINAL_URL“.
  • The HTTP request method will be saved in the environment variable “REQ_METHOD“.
  • For each HTTP header “key”, a corresponding environment variable “REQ_HEADERS_key” will be created.
  • The full querystring will be saved in the environment variable “REQ_QUERY“; it will also be further parsed into individual variables: for each querystring “key”, a corresponding variable “REQ_QUERY_key” will be created.

All of this happens before the execution of the Function, so once the Function is invoked these variables are already available for consumption. (This happens here at dcc9e1d.)
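
As a hedged illustration (assuming a hypothetical request such as GET https://<myapp>.azurewebsites.net/api/verify?challenge=abc123, the function.json above, and the same variable-naming convention as the portal template), consuming these values inside the script looks like this:

# Hypothetical request: GET https://<myapp>.azurewebsites.net/api/verify?challenge=abc123
Write-Output $req_method          # the HTTP method, e.g. "GET"   (REQ_METHOD)
Write-Output $req_original_url    # the full original URL         (REQ_ORIGINAL_URL)
Write-Output $req_query           # the raw querystring           (REQ_QUERY)

# every querystring key gets its own variable, REQ_QUERY_<key>
$challenge = $req_query_challenge

# echo the value back in the response, Dropbox-style
[System.IO.File]::WriteAllText($res, $challenge)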

To read the body of the request you just read it as you would read any file in PowerShell, and then parse it according to the content; so if the body of the request is JSON, you read the file and parse it as JSON like the following:

$mycontent = Get-Content $req | ConvertFrom-Json

Note: If the Function is executing because of a trigger binding (such as HTTP), the rest of the input bindings are skipped. (Check the code here at commit dcc9e1d.)

Output

Similar to the request, your script should write the response to a file, which in turn will be read by the Azure Functions runtime and passed to the HTTP output binding to be sent on your behalf. The runtime will also assign the path of this file to an environment variable that matches the “name” property you define in the output binding in function.json.

So for the example function.json above, you write the content of your response to the file whose path is stored in “res”:

[io.file]::WriteAllText($res, $message)

This happens here at commit dcc9e1d.

Default Behaviour

Now, if the content you write to the file is a string that cannot be parsed as JSON, then: it will be considered the body of the HttpMessage, the response will have the default HTTP content-type “application/json”, and it will be run through the default MediaTypeFormatter. Take the following as an example:

Function:

 $message = "This is a text"
[System.IO.File]::WriteAllText($res,$message)

Result:

Content-Type: application/json

"This is a text"

Notice that the text written to the file in the script has no quotes, but the result in the response body is wrapped in double quotes; this is because the default content-type of the response is “application/json”, and the HTTP binding will format it accordingly and wrap it in double quotes.

More Control

If you want more control over the response, then you have to write a JSON object to the file; this JSON object holds all the information about how the response should look: the headers, the body, and the response status.

The JSON object can contain the properties “body“, “headers“, “isRaw” (more about it below), and “statusCode” (an int), whichever you want to change. For example, if I want the content of the response to be simple text with a text/plain content-type, then the script should write the following:

$message = "{ `"headers`":{`"content-type`":`"text/plain`"}, `"body`":`"$name`"}"
[System.IO.File]::WriteAllText($res,$message)

There are several points that need to be brought up:

  1. If the “body” property exists, then only the value of the “body” property will be in the HttpMessage body, otherwise the whole content of the JSON object will be in the HttpMessage body.
  2. As of the time of writing this post, PowerShell Azure Functions run under PowerShell 4.0; this means that if you use the Out-File cmdlet to write to the file, it will always append a line feed (\r\n) at the end of the string, even if you supply the -NoNewLine parameter! Use the WriteAllText method instead.

The parsing can be found here at commit 3b3e8cb.

Formatters

Great, so far we have managed to change the body, the headers (including the content-type), and the status of the response. But this is still not the whole story; depending on the content-type header, the Azure Functions runtime will find the right MediaTypeFormatter for the content and format the response body accordingly.

There are several MediaTypeFormatters in the System.Net.Http.Formatting library: JsonMediaTypeFormatter, FormUrlEncodedMediaTypeFormatter, XmlMediaTypeFormatter, and others. The issue with the formatters is that they might add the UTF-8 Byte Order Mark (BOM) at the beginning of the content, and if the recipient is not ready for this it might cause a problem.

Dropbox, for example, provides a way to watch changes to a file through their API by registering a webhook, and the way Dropbox verifies the webhook is by making a request to the endpoint with a specific querystring; it then expects the webhook to respond by echoing the querystring back. When I created my Function I didn’t change anything, so the runtime used the default formatter and prepended the UTF-8 BOM characters (0xEF, 0xBB, 0xBF) to the body, which of course was rejected by Dropbox.

The way to skip these formatters is by setting the “isRaw” property mentioned above to true. For comparison, the following script (without “isRaw”) writes the plain text “emad1234” to the response:

$message = "{ `"headers`":{`"content-type`":`"text/plain`"}, `"body`":`"emad1234`" }"


Taking a screenshot of the response in Fiddler’s HexView, it looks like this:

BOM characters in the response of a PowerShell Azure Function

Have you noticed the characters I surrounded with the red box? That’s the BOM (0xEF, 0xBB, 0xBF).

But once we add the “isRaw” property like this:

$message = "{ `"isRaw`": true, `"headers`":{`"content-type`":`"text/plain`"}, `"body`":`"emad1234`" }"

The result will be without the BOM:

Fiddler view of the Azure Function response without the BOM

This can be found here at commit 3b3e8cb.

 

Final Notes

It’s worth mentioning that the Azure Functions runtime also provides a content-negotiation feature, so you can leave it to the request to decide.

Another parting thought: of course you don’t have to craft your JSON object by concatenating strings together; you can use PowerShell arrays and hashtables to do that, check the articles here and here.
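
As a hedged sketch of that approach, the isRaw example above can be built with a hashtable and ConvertTo-Json instead of escaped string concatenation:

# build the response object as a hashtable, then serialize it
$response = @{
    isRaw   = $true
    headers = @{ "content-type" = "text/plain" }
    body    = "emad1234"
}

[System.IO.File]::WriteAllText($res, ($response | ConvertTo-Json -Depth 3))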

Finally, isn’t it awesome to be able to see that in the source code!

Conclusion

PowerShell is probably the language that has received the least love from the Azure Functions team, but this does not mean you should throw your scripts away; hopefully with the tips in this post you will find a way to use them again.

Help with DotNetArabi

DotNetArabi started eight years ago as one of the first Arabic websites to offer high-quality Arabic content, and through it many episodes have been delivered with experienced, high-calibre Arab technologists. The podcast started as an individual, non-profit effort at personal expense, and for several years it kept up a good pace, consistently releasing an episode every four weeks.

Over the past two years, however, releasing episodes has slowed down, and the gap between one episode and the next has kept growing despite every attempt to increase production. The idea of moving the work from an individual effort to a group effort has been on my mind for years, but I could not find a clear, practical mechanism for turning it into volunteer group work, one that would let me take up the offers of some loyal listeners who expressed a desire to participate. Things stayed as they were until it became necessary to try group volunteering, even with the simplest of tools; better late than never.

Based on this, and after consulting some friends and colleagues, I would like to open the door to participation in DotNetArabi so we can produce episodes faster and at high quality. To make participating easier, I need to explain the episode production process and list its steps, so that a volunteer can easily choose what to contribute.

Production Steps

First: Finding the Right Guest

In this step I search for a suitable guest for the show. The guest needs to be experienced in his field, and the available ways to establish that are:
• Looking for the guest's publications, such as a blog or high-quality articles.
• Looking for the guest's contributions on GitHub.
• Holding a senior technical position at his company.
• Or being recommended directly by a trusted person.
It must be noted here that we do not limit professionalism to those who hold such positions or achievements; there are many professionals who never had the opportunity to do these things, but for DotNetArabi this is the available way to verify a guest's ability.

Whoever wishes to volunteer for this task will search for a guest and then send me some of the links they found that list the guest's achievements.

It is worth mentioning that this step is open to everyone, with no coordination needed.

Second: Arranging the Appointment

In this step I contact the guest, tell him about DotNetArabi, and offer to record an episode with him. If the guest accepts, we arrange a time to record the episode and lay out the main points of the upcoming episode.

Third: Recording the Episode

In this step the episode is recorded with the guest over Skype.

For the two previous steps, "Arranging the Appointment" and "Recording the Episode", I think it would be difficult for anyone other than the show's host to take them on.

Fourth: Audio Production

Recording the episode produces an MP3 audio file that needs processing, which includes the following:
• Cutting out segments that contain mistakes, stumbles, and unwanted stretches such as "aaah".
• Improving the audio quality by running it through audio filters.
• Creating a new MP3 file and editing its properties, such as the title, the artwork, and so on.

This step requires some technical skill; it does not need a lot of knowledge, but it does need practice. I therefore expect to train the volunteer on how to do the production, and to review the first few episodes closely before handing the task over completely.

Fifth: Publishing the Episode

This step includes uploading the MP3 file to the website, writing the introduction on the website, and announcing the episode on social media. This step also involves some technical details, and I will of course help whoever takes it on at the beginning.

With these five steps the episode is done, and the journey starts again with another episode.

How We Will Coordinate

Each volunteer will list the tasks they wish to volunteer for, and several volunteers may apply for the same task. Based on that, each episode will have a different arrangement depending on the volunteers' schedules and how much time they can make available. The tool I have chosen to coordinate these steps among the volunteers is trello.com, which is built on the idea of a Kanban board: each episode will have a card that moves between the steps, represented as columns on the board.

Each volunteer can pick up a card in the column of a step they want to work on, assign it to themselves until the work is done, then push it to the next step's column, and so on.

"What do I get if I volunteer?"

Someone might ask: "What do I get if I volunteer?" Besides contributing to growing others' knowledge and enriching Arabic content on the Internet, everyone who volunteers will be thanked; and since DotNetArabi is not a for-profit organisation, the thanks will take the form of crediting everyone who took part in producing an episode in that episode's summary on the website.

What Now?

If you would like to take part in producing DotNetArabi episodes, send a message to "emad.ashi" on GMail, and we will arrange things with you and explain whatever could not be covered in this article. And if you don't wish to take part but have any advice, comments, or criticism, please don't hesitate to send them as well.

Thank you for your interest, and let's stay in touch.

Productivity Satisfaction Maturity Levels

Such a fancy title, huh? Probably the influence of our industry (a bad influence)! Well, you can just substitute it with something like “these are the stages of productivity between which satisfaction jumps by exponential magnitudes”.

Note: before we check these stages out, it goes without saying that all the “he” in this article are absolutely replaceable with “she”; it’s just that the “he/she” style is too verbose.

0. Ignorance

In this stage the individual doesn’t know what he is missing; he does not add any value to himself or the community; he enjoys “time-waste” activities, such as watching TV or YouTube. Indeed there is joy in being a couch potato, but it is negligible compared to the next levels, which he hasn’t experienced yet, which explains why I gave this stage the number 0.

Note that I am not talking about planned recreational activities after productive accomplishments; I am talking about this kind of activity being THE activity most of the individual’s time is spent on.

Also note that I am not trying to degrade anyone here; people might be in this stage due to circumstances out of their control, or because they haven’t tasted the satisfaction of the next levels.

1. Knowing

In this stage the individual learns something new; he watches documentaries, reads books, etc. The satisfaction of “knowing” tingles the brain with every new piece of knowledge acquired. It’s an intrinsic part of human nature as intellectual beings.

This is where the majority of people are, and where they usually get stuck; the number of books read becomes the gauge of the individual’s pride, rather than the utilization of the value gained from reading them.

2. Sharing

Reading books is not enough at this stage; there is an overflow of excitement spilling over and around. The minute he sees others’ reactions when he shares the knowledge, the satisfaction doubles; he looks for every occasion at which he can cultivate the excitement of passing the knowledge on.

Nonetheless, it’s important to understand that sharing knowledge at this level is limited to one-to-one interactions, or at most a group of friends at a hangout.

3. Doing

The individual has read a lot about his favorite topic, and he has talked about it to others a lot. Say he loves carpentry: he loves reading about it, visiting galleries, appreciating carpenters at work… now what? He starts doing; he takes the first step in transforming this knowledge into action: he buys the tools and starts working on his first piece.
He also discovers how difficult it is; he might hit some frustrations, but he keeps going in small but steady steps until he creates his first piece! Once he finishes, the satisfaction is indescribable! He keeps looking at it, and in his mind it echoes: “this is me, I did this!”, “this piece of art didn’t exist before I started working on it”, “this solution solves that problem I had”, “I added value”.

This phase, though, is very difficult to step into, and there are several reasons why:

  • It’s not easy to discover; the majority of people are not doing it, so it doesn’t occur to him that there is more beyond sharing, and that there is a satisfaction greater than that of just knowing.
  • Lack of self-confidence: even if it occurs to him that doing could be much more satisfying, he does not have the confidence in himself to take action.
  • Doing can be difficult, expensive, and can require effort and sacrifice. It’s not always easy, depending on your circumstances or the field you are in; for example, it’s definitely more accessible to start an Open Source programming project than to get involved in a nuclear physics lab to try something out.

Since this is the most difficult stage to get into, I have to stop here and give a little push and help if I can. I tell you in a very loud, clear, slow voice: “IF YOU ARE NOT DOING, YOU ARE MISSING OUT!” I am not going to try the “stop procrastinating” or “just do it” style; it’s up to you, but you are missing a lot! When you are deciding between flipping through a game on your mobile or opening your development IDE, remember that you are giving away a joy that is magnitudes greater than the joy of playing a game of Sudoku.

4. Influencing

He did, and did, and did some more; now he starts presenting at User Groups, writes about it on his blog, and teaches it. He thought that the ultimate joy was in doing, but he was wrong; he starts seeing others doing because he showed them the path, because he helped, because he provided so much value that it started influencing others to do and add value themselves… BOOM! A new level of joy.

This also gives him a boost of endurance and patience to support others; he is happy when he receives an inquiry email or when someone approaches his desk for a consultation. The success of others becomes his success.

5. Scaling

What can come after influencing? I can only assume scaling: in this stage, he has probably written a book, become a thought leader, or become an international speaker; now he is a public figure. And no, no… it’s not the fame I am talking about; it is the unquantified accumulation of value he has added to so many people, a value with so much momentum that it brings satisfaction and joy equal to the sum of all the satisfaction and joy he brought people through his influence. He bumps into people he has never met and they thank him for what he did for them!

Finally, remember that learning never stops; check which stage you are at, and know that there is much more satisfaction in the next one. In a nutshell: satisfaction is just a synonym for adding value.

I Have Been Hacked!

Yes, I’ve been hacked, and it wasn’t fun! In this post I will go through some of the lessons learned. But before that, let’s shed some light on what happened.

It began when a friend of mine notified me that my DotNetArabi blog, which is a WordPress blog, had new suspicious and unrelated posts. I rushed to my admin page, deleted these posts, and then changed my password to a stronger one.

I wasn’t so much afraid of the impact; after all, this is an Arabic podcast blog while the posts were in English. In addition to that, most likely few in the audience saw these posts (since they were recent), and those who did would excuse me and understand that something went wrong (I like my audience :P).

After deleting these posts I thought I should also check my folders and files, and indeed when I did, I found hundreds and hundreds of files that aren’t part of WordPress, most of them created in a single day. Deleting these wasn’t as easy as deleting the posts though: there were many files, they were in different folders, I didn’t know all the WordPress files well enough to distinguish them from the malicious ones, my host provider does not provide a file management system, and the files didn’t have much in common to find a single rule to delete them by (maybe the date was a good indicator, but it wasn’t good enough).

Fair enough; since the harm was contained for now (or so I thought!), I decided to take this task easy by deleting these files in batches. This decision was also influenced by the fact that FileZilla kept disconnecting; I couldn’t just select many suspicious files and delete them all at once.

Days passed by and I received an email from my host provider informing me that I had been the victim of a hack; the email listed a couple of files as a sample of many files (_the_ files) that were sending spam to others. I already knew about the files, but I didn’t know about the “sending spam” part. Of course I should have known better; why else would these files exist in the first place?! Duh!

Anyway, my host provider urged me to take action but didn’t mention anything about taking measures if I didn’t, so I kept doing what I was doing: deleting files at my leisure, even though I received probably another one or two of the same emails from my host provider.

A week or so later, my Google Analytics numbers flattened to 0! Being lazy (actually I was in the middle of moving houses, so I shouldn’t bash myself here :P), I didn’t check what the reason was; I thought I could check it in a couple of days, and that maybe it was the mobile app I use to read my analytics rather than the analytics themselves.

And then a different email reached my inbox: “your website has been suspended for the last 3 days because it’s been a source of spam”! This is when I freaked out; it’s true that I don’t make money off the hits to my blog, but being down for that long is bad, bad, bad for reputation.

I instantly sent them an email explaining how angry I was about their inadequate notification/action protocol; their initial notifications didn’t mention any threat of closing down the website, and their notification that the website had been closed down came 4 days after they had closed it!

I demanded they put it up again ASAP, and I promised to remove the malicious files. They refused! No going live again before all the files were deleted.

Being under pressure, I had to try all sorts of things, to the extent that I tried Windows Explorer’s built-in FTP client, and to my surprise, it worked better than FileZilla! I was happy seeing that green progress bar deleting all these awful files. After I made sure I had deleted everything that looked suspicious to me, I sent the host provider another email informing them that everything was fine now and my website was ready to go up again (yes, they don’t have chat support, only email).

Hours and hours later, I received an email from them again saying that I still had malicious files, “here is a sample”, and the website would not be up until this was solved. This time, though, they provided me with two options: either delete the whole website and upload from a backup I have (which is potentially infected as well), or pay for a service on an hourly basis to fix the problem for me.

I decided to go with the first option, but rather than deleting the whole website, I asked them to delete only the suspicious folder. Hours and hours later we managed to do this, and finally my website was up again (I went through more problems after that, but maybe we can save those for the list of lessons below).

Not a short story, looking at the narration above; now let’s look at the lessons learned and how I can tie things together.

You have a website? You are already a target

Security hadn’t been something I neglected, but it was something I miscalculated; the hacked part of my website was my podcast DotNetArabi’s blog, and my thinking had always been: “Why would someone hack my podcast blog? My audience is very specific; it does not host any sensitive information; the ROI of hacking it is little compared to other sites… so the possibility of being a victim of hacking is very minimal.”

But they weren’t after my website, the content, or my audience; they were after the resources my website runs on! My website became a platform to annoy others. I agree, I should’ve known better, but the comfort of not doing much to secure my website, along with the “low possibility” of being a target, made me feel good about not securing it!

Do you have a website that you manage? GO SECURE IT NOW!! Do all that is necessary to secure it: if it is a WordPress blog check the points below, and if not, look up how to secure it. YOU ARE A TARGET… RUN… NOW!

Don’t be Lazy

One of the reasons I ended up in a bad situation is that I was a little lazy; I know I was moving houses and was too busy, but I also knew about the malicious files beforehand and took it easy. Tsk tsk tsk, Emad, bad!

Windows Explorer’s FTP client VS FileZilla

For a long time I looked down on Windows Explorer’s FTP client, especially compared to products that have been on the market for a long time like FileZilla. To my surprise, for the specific task of deleting files, Windows Explorer’s FTP client outperformed FileZilla: no disconnections at all. If deleting the files hadn’t been such a difficult task due to the bad tool, I might have been in a better position.

Don’t put all your eggs in one basket

I have one site account with my host in which I put 3 websites; the resources these websites need are really minimal, so I just created sub-folders and created a web app in each folder: one for my personal blog emadashi.com, one for my DotNetArabi podcast, and one for the blog of the same podcast. This was made possible by some URL Rewriting tricks.

The plague didn’t hit all of them, only the blog of the podcast, but when the host decided to take the website down, it took them all down, simply because to my host it’s a single website.

Regardless of my host’s decision to take the website down, there are so many things that can go wrong with a website that might affect all the subsites. Separation is good in this case.

Manage your backups

Like I said, I had 3 websites in 3 folders, so I didn’t manage the backup of the website in its entirety; instead I managed the backups separately. Makes sense? Well, I also had a web.config in the root in which I laid out the URL rewriting rules, without which the internal links to my blog posts would be broken (shout out to Maher for his help and notifications). And you guessed right, my dear reader, I didn’t back this one up. In fact I did back it up, but by mere coincidence! *slaps own hand*. So make sure you back up your website in its entirety.

Also, I thought I knew where my backups were. I was wrong! I was disappointed that I had to go looking for them! Are they on the external drive? Are they on my personal computer? Are they in my personal VM on my work computer?

Your host’s influence

This is very important; let’s see:

  • Communication: It was good of my host to notify me of the hack, but they didn’t give me a clear message about what specifically I should do and what the potential outcomes were if I didn’t. Instead of sending me samples of the malicious files, they could have sent me a list of all of them, saving me (and them) the time and effort of looking them up. I can hear you say that this is not their problem, but considering the effort and time they wasted on the back-and-forth communication, and the spam afflicting their servers… I reckon it would have been better if they had just sent me the full list of files.
    Also, they didn’t make it clear that they would shut me down if I didn’t delete these files in a timely manner; if they had, I would have been more active and keen to delete them. My impression was that the effect of these files was minimal.
  • Response Time: my host does not provide chat support, only email; this meant long latency before we could cooperate and solve the problem, especially with the notification that my website had been taken down arriving 3 days after the fact.
  • To their credit, in their last email after the problem was solved, they suggested a couple of points on how to secure a WordPress blog; nothing fancy or detailed, but it was good of them, I guess.

Use scan service?

I deliberately put a question mark at the end of this title; I am not sure how good such services are. My host advised me to use SiteLock, but don’t take this as a recommendation as I haven’t tried it yet; I just think it’s worth mentioning here.

Securing WordPress

There is plenty of content on the web about securing a WordPress blog; here is one example. But without getting too sophisticated, these are the most important things to do:

  • Make sure that the engine is up to date
  • Make sure the plugins are up to date
  • Make sure you use a strong password
  • FTP access: to be able to upload media content to your blog you might need to provide FTP access (if the installation didn’t set that up). If you are hosting your WordPress on Linux, DO NOT GIVE 777 permissions!

Conclusion

It all came down to me belittling the possibility of being hacked! So let me ask again: do you have a website? You are already a target; don’t be lazy, go secure it NOW!

“Cloud-Ready Web Apps With ASP.NET 5” – Ignite Australia

Last week was a wonderful week spent on the beautiful Gold Coast at a very interesting Microsoft Ignite conference. I got the opportunity to present on how ASP.NET 5 is designed to be suitable for hosting on the cloud; the following is the recording of my session:

If you missed the event you can catch up with the recordings of the sessions on Channel 9; videos are still being uploaded.

Dependency Injection In ASP.NET 5 – One Step Deeper

Dependency Injection has always been an integral part of all the web frameworks under the ASP.NET umbrella: Web API, SignalR, and MVC. But historically these frameworks evolved separately from each other, hence each had its own way of supporting Dependency Injection. Even with Katana‘s attempt to bring these frameworks together through OWIN, you still needed to do some hackery to have a unified container that supported them all at once. Well, things have changed!

In this post I will dive a little deeper than this MSDN post; we will examine the main interfaces involved, take a small peek inside at how things run, and explain what it really means to switch to your IoC container of choice.

Abstractions

The decision the ASP.NET team made was to provide the dependency injection functionality by abstracting the most common features of the most popular IoC containers out there, and then letting the different Middlewares interact with these interfaces to achieve dependency injection.
ASP.NET 5 supplies a basic IoC container that implements these interfaces, but it also allows the developer to swap this default implementation with their own, through which they can use the IoC container of their choice. Usually this is not something to be implemented by the application developer himself, but rather by the IoC container maintainers: the people behind Autofac, Ninject, etc.
Having said that, the ASP.NET team has provided basic implementations for the most common IoC containers, but these implementations are most likely to be revised by the IoC maintainers themselves.

Let’s examine the interfaces, shall we?

IServiceProvider

This is the main interface, through which the developer will be able to retrieve the implementation of a service he/she previously registered with the container (we will come to registration later). This interface has one method only: GetService(Type); think of container.Resolve<Service>() in Autofac, or kernel.Get<Service>() in Ninject.
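
For reference, this is the familiar System.IServiceProvider, so the shape is simply:

// the single-method abstraction the whole mechanism revolves around
public interface IServiceProvider
{
    object GetService(Type serviceType);
}

Typed helpers such as GetRequiredService<T>() (used by the MVC code later in this post) are just extension methods layered on top of GetService(Type).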

All Middlewares will have access to two IServiceProvider instances:

  • Application-level: made available to the Middleware through HttpContext.ApplicationServices property
  • Request-level: made available to the Middleware through the HttpContext.RequestServices property. This scoped ServiceProvider is created for each request at the very beginning of the request pipeline by an implicit Middleware, and of course this request-level Service Provider will be disposed by the same Middleware at the end of the request just before sending the response back.

Note: I agree, the naming of the ApplicationServices and RequestServices properties might be a little confusing, but just take it as is for now; both are IServiceProvider instances.

All the Middlewares will use these properties (hopefully RequestServices only!) to resolve their services; e.g. the ASP.NET MVC Middleware will create the controllers and their dependencies through RequestServices (if you don’t believe me check the code, it’s open source ;)), and the same goes for creating controllers in Web API, etc.

IServiceScope

Alright, so we said that the RequestServices Service Provider is a scoped container that will be disposed by the end of the request, but how is this managed? You guessed right, by an IServiceScope.

This interface should be a wrapper around a scoped container, whose role is to dispose the container at the end of the request. So naturally it has:

  • IServiceProvider property: the scoped container
  • Dispose() method: by inheriting the IDisposable interface

The question is, who creates the IServiceScope? This brings us to the 3rd interface.

IServiceScopeFactory

A very simple interface as well; it has one method, CreateServiceScope(), which of course returns an IServiceScope.
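
Putting these two small abstractions together, a hedged sketch of their shapes (using the member names as described in this post; the exact names have shifted slightly between pre-release versions) would be:

using System;

// wraps the scoped container and disposes it at the end of the request
public interface IServiceScope : IDisposable
{
    IServiceProvider ServiceProvider { get; }
}

// created once, used to spin up one IServiceScope per request
public interface IServiceScopeFactory
{
    IServiceScope CreateServiceScope();
}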

So if you maintain an IoC container and you want to use it in place of the default one, you have to implement the above-mentioned interfaces.
“But Emad, you didn’t talk about registering services with the container! And how does it all fit together?!” Patience, my friend; let me just finish this section with the last two classes and then we will jump to registration.

ServiceLifetime

An enum with 3 values that define the lifetime of services (objects, really):

  • Singleton: single instance throughout the whole application
  • Scoped: single instance within the scoped container
  • Transient: a new instance every time the service is requested

ServiceDescriptor

Finally, the last class! This class is the construct that will hold all the information the container will use in order to register a service correctly; imagine it saying: “hey you, whichever container you are, when you want to register this service make sure it’s a singleton, and take the implementation from this type”. Fancy? Let’s check the members of interest:

  • ServiceType: a property of type Type, this will be the interface for which you will want to substitute with a concrete implementation, e.g. ISchoolTeacher
  • ImplementationType: a property of type Type, this will be the implementation type of the ServiceType above, e.g. SchoolTeacher
  • Lifetime: The lifetime desired for this service: Singleton, Scoped, or Transient.
  • ImplementationFactory: a Func<IServiceProvider, Object>. In some scenarios the app developer wishes to provide a factory method to instantiate the concrete implementation of the service; maybe there are factors outside the service’s control that mandate how it should be created, and this property holds that factory method. And yes, they’re mutually exclusive: if you provide an ImplementationType you don’t provide an ImplementationFactory, and vice versa.
  • ImplementationInstance: so, you can provide a type as an implementation, and you can provide a factory method to create the object; you can also provide a specific instance, and this property of type Object holds that instance. It is also mutually exclusive with ImplementationType and ImplementationFactory. (A hedged sketch of the three options follows this list.)
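
As a hedged illustration of the three mutually exclusive options (the constructor overloads below are assumptions that mirror the properties just described, and ISchoolTeacher, SchoolTeacher, and IGradeBook are hypothetical types):

// 1. Type-based: the container constructs SchoolTeacher (and its dependencies) itself.
var byType = new ServiceDescriptor(
    typeof(ISchoolTeacher), typeof(SchoolTeacher), ServiceLifetime.Transient);

// 2. Factory-based: the container invokes the delegate whenever the service is requested.
var byFactory = new ServiceDescriptor(
    typeof(ISchoolTeacher),
    sp => new SchoolTeacher((IGradeBook)sp.GetService(typeof(IGradeBook))),
    ServiceLifetime.Scoped);

// 3. Instance-based: the container always hands back this exact object (a singleton by nature).
var byInstance = new ServiceDescriptor(typeof(ISchoolTeacher), new SchoolTeacher());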

Great, now for your application to run as expected, you will have a list of these ServiceDescriptors that you will hand to your container, and tell it to register these services according to how they are described. So let’s look at how this runs together including the registration part.

Registering Services

Now, to register your services, ASP.NET 5 expects your Startup class to have a method called ConfigureServices; it takes a list of ServiceDescriptors wrapped in an IServiceCollection and returns nothing (there is another form of this method that we will discuss shortly). All you have to do is create ServiceDescriptors for the services you want to register and add them to the list. The web app will pick this list up later and register it with the container.

public void ConfigureServices(IServiceCollection services)
{
    var serviceDescriptor = new ServiceDescriptor(typeof(IBankManager), typeof(BankManager), ServiceLifetime.Transient);
    services.Add(serviceDescriptor);

    // Add MVC services to the services container.
    services.AddMvc();
}

Note: Creating ServiceDescriptors can be a little verbose, which is why you see Middlewares using extension methods to create these ServiceDescriptors, like services.AddMvc().
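
For instance, assuming the generic registration helpers that ship alongside the abstractions, the manual descriptor above collapses to a one-liner:

// equivalent to newing up the ServiceDescriptor and calling services.Add(...)
services.AddTransient<IBankManager, BankManager>();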

So how will this be orchestrated with the application start?

The following pseudo statements explain the server startup and how the Service Provider is created; the corresponding code can be found in the HostingEngine.Start method:

Note: this post is based on the beta4 version; things have changed since then, but the main behavior is the same, so adding the code here won’t add much value; pseudo code should be good enough.

  1. Hosting engine will create an IServiceCollection, which is a collection of ServiceDescriptors
  2. Hosting engine will add all the services it needs to the list
  3. Hosting engine will ensure that there is a Startup class in your assembly and that it has a method called ConfigureServices
  4. Hosting engine will load this method and call it passing the IServiceCollection
  5. ConfigureServices in the Startup class will add the apps services to the list
  6. Hosting engine will create a DefaultServiceProvider (the container) and use the information in IServiceCollection to register the services to the DefaultServiceProvider
  7. Hosting engine will create the Application Builder (IApplicationBuilder) and assign the new Service Provider to the property IApplicationBuilder.ApplicationServices so it can use it further down
  8. Hosting engine will add a Middleware before giving the chance for the Startup.Configure to run, placing it to be the first Middleware in the pipeline. The Middleware is RequestServicesContainerMiddleware, which will be discussed shortly.
  9. Hosting engine will call Configure method in Startup class passing the Application Builder to build the Middleware pipeline where the Service Provider can be used through the ApplicationServices property to build the Middleware if needed

Great, the server is configured, started, and ready to receive requests. What happens now during a request? How is the dependency injection run?

Running a Request

When the request first comes in, an HttpContext will be created and handed to the Invoke method of the first Middleware, and subsequently to all the Middlewares. But just before it’s handed to the first Middleware, the Application Builder’s Service Provider is assigned to the property HttpContext.ApplicationServices, making the application-level Service Provider available through the HttpContext for all the Middlewares to use as they need. Keep in mind, though, that this is the application-level Service Provider, and depending on the IoC container of choice, your objects might stay alive for the whole lifetime of the application if you use it.

Note: in theory, as an application developer, you should not use the Service Provider directly; if you do then you are using the Service Locator pattern, which is advised against.

Ok then, that was an application-level Service Provider, isn’t there a Service Provider that is scoped for the lifetime of the request? Yes, there is.

In step 8 in the list above, we mentioned that the hosting engine adds the RequestServicesContainerMiddleware at the beginning of the pipeline, giving it the chance to run first.
The code for this Middleware hasn’t changed much in a long time, so I think it’s safe to put it here :)

public async Task Invoke(HttpContext httpContext)
{
    using (var container = RequestServicesContainer.EnsureRequestServices(httpContext, _services))
    {
        await _next.Invoke(httpContext);
    }
}

Going back to the request execution: the server creates the HttpContext, assigns the application-level Service Provider to HttpContext.ApplicationServices, and then invokes the first Middleware, which is the RequestServicesContainerMiddleware. Can you see that using statement in the Invoke method? That’s where the magic lies; all it does is create a scoped Service Provider that will be disposed at the end of the request. In pseudo steps:

  1. Request is handed by RequestServicesContainerMiddleware
  2. Invoke will retrieve an IServiceScopeFactory from the application-level Service Provider via HttpContext.ApplicationServices.
  3. IServiceScopeFactory will create a scoped container (think of ILifetimeScope in Autofac)
  4. The scoped container will be assigned to the property HttpContext.RequestServices
  5. The Invoke method calls the subsequent Middlewares, allowing the request to go through
  6. When all the Middlewares have been invoked and the call returns back to the RequestServicesContainerMiddleware, the scoped Service Provider is disposed by the “using” statement.

Note: RequestServicesContainerMiddleware uses a wrapper/helper class, RequestServicesContainer, to manage the creation and disposal of the scoped Service Provider; that is actually the object used in the “using” statement.

HttpContext.RequestServices is the scoped container for the request lifetime, and all the subsequent Middlewares have access to it. For example, if you check MvcRouteHandler.InvokeActionAsync you will see that it uses it to create the controllers:

private async Task InvokeActionAsync(RouteContext context, ActionDescriptor actionDescriptor)
{
    var services = context.HttpContext.RequestServices;
    Debug.Assert(services != null);

    var actionContext = new ActionContext(context.HttpContext, context.RouteData, actionDescriptor);

    var optionsAccessor = services.GetRequiredService<IOptions<MvcOptions>>();
    actionContext.ModelState.MaxAllowedErrors = optionsAccessor.Options.MaxModelValidationErrors;

    var contextAccessor = services.GetRequiredService<IScopedInstance<ActionContext>>();
    contextAccessor.Value = actionContext;
    var invokerFactory = services.GetRequiredService<IActionInvokerFactory>();
    var invoker = invokerFactory.CreateInvoker(actionContext);
    if (invoker == null)
    {
        LogActionSelection(actionSelected: true, actionInvoked: false, handled: context.IsHandled);

        throw new InvalidOperationException(
            Resources.FormatActionInvokerFactory_CouldNotCreateInvoker(
                actionDescriptor.DisplayName));
    }

    await invoker.InvokeAsync();
}

Note: again, a reminder that you shouldn’t need to use the Service Provider directly; try to express your dependencies through constructors and avoid the Service Locator pattern.
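
As a hedged sketch of the preferred alternative (AccountsController and GetAccounts are hypothetical), declare the dependency in the constructor and let the MVC Middleware resolve it from RequestServices for you:

public class AccountsController : Controller
{
    private readonly IBankManager _bankManager;

    // the dependency is injected; no need to reach for HttpContext.RequestServices
    public AccountsController(IBankManager bankManager)
    {
        _bankManager = bankManager;
    }

    public IActionResult Index()
    {
        return View(_bankManager.GetAccounts());
    }
}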

Awesome, now what if you want to substitute the default container with something like Autofac? Glad you asked, let’s see how.

Bring Your Own IoC Container

Before we start, this is a reminder that this is something to be implemented by the IoC container maintainers, not by the application developer.

To use your own container you have to implement the interfaces IServiceProvider, IServiceScope, and IServiceScopeFactory. Implementing them should be straightforward because each interface mandates what you need to do; the Autofac implementation can be used as an example.

But the subtle thing that needs to be explained is that the ConfigureServices method in the Startup class has another form the hosting engine looks for, which is used when the developer wants to bring his own IoC container. In this form the method returns an IServiceProvider: once all the desired ServiceDescriptors are added to the IServiceCollection, the developer creates his container, registers the services the way the container expects, and then returns the container’s implementation of IServiceProvider. The following is the code to use Autofac:

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    // Add MVC services to the services container.
    services.AddMvc();

    var builder = new ContainerBuilder();

    // Create the container and use the default application services as a fallback
    AutofacRegistration.Populate(
        builder,
        services);

    var container = builder.Build();

    return container.Resolve<IServiceProvider>();
}

The AutofacRegistration.Populate method registers the services the way Autofac likes, and registers the IServiceScope and IServiceScopeFactory implementations (this is only a part; check the complete code at the link):

private static void Register(
    ContainerBuilder builder,
    IEnumerable<ServiceDescriptor> descriptors)
{
    foreach (var descriptor in descriptors)
    {
        if (descriptor.ImplementationType != null)
        {
            // Test if an open generic type is being registered
            var serviceTypeInfo = descriptor.ServiceType.GetTypeInfo();
            if (serviceTypeInfo.IsGenericTypeDefinition)
            {
                builder
                    .RegisterGeneric(descriptor.ImplementationType)
                    .As(descriptor.ServiceType)
                    .ConfigureLifecycle(descriptor.Lifetime);
            }
            else
            {
                builder
                    .RegisterType(descriptor.ImplementationType)
                    .As(descriptor.ServiceType)
                    .ConfigureLifecycle(descriptor.Lifetime);
            }
        }
        else if (descriptor.ImplementationFactory != null)
        {
            var registration = RegistrationBuilder.ForDelegate(descriptor.ServiceType, (context, parameters) =>
            {
                var serviceProvider = context.Resolve<IServiceProvider>();
                return descriptor.ImplementationFactory(serviceProvider);
            })
            .ConfigureLifecycle(descriptor.Lifetime)
            .CreateRegistration();

            builder.RegisterComponent(registration);
        }
        else
        {
            builder
                .RegisterInstance(descriptor.ImplementationInstance)
                .As(descriptor.ServiceType)
                .ConfigureLifecycle(descriptor.Lifetime);
        }
    }
}

But then how does this fit with the 9 steps above in Registering Services? Well, it changes a little bit to become like this (the changes are in steps 3 and 6 to 10):

  1. Hosting engine will create an IServiceCollection, which is a collection of ServiceDescriptors
  2. Hosting engine will add all the services it needs to the list
  3. Hosting engine will ensure that there is a Startup class in your assembly and that it has a method called ConfigureServices. First it will look for the form that returns an IServiceProvider; if that is not found, it uses the one that returns nothing
  4. Hosting engine will load this method and call it passing the IServiceCollection
  5. ConfigureServices in the Startup class will add the apps services to the list
  6. ConfigureServices will create the IoC container of choice
  7. ConfigureServices will register all the services in the IServiceCollection to the new container
  8. ConfigureServices will make sure to register the IServiceScope and IServiceScopeFactory with the services (remember step 2 in Running a Request above?)
  9. ConfigureServices will create an instance of the container’s implementation of the IServiceProvider and return it
  10. Hosting engine will no longer create a DefaultServiceProvider from the IServiceCollection; instead it will retrieve the IServiceProvider supplied by ConfigureServices
  11. Hosting engine will create the Application Builder (IApplicationBuilder) and assign the new ServiceProvider to the property IApplicationBuilder.ApplicationServices so it can use it further down
  12. Hosting engine will add a Middleware before giving the chance for the Startup.Configure to run, placing it to be the first Middleware in the pipeline. This is the RequestServicesContainerMiddleware discussed above.
  13. Hosting engine will call Configure method in Startup class passing the Application Builder to build the Middleware pipeline where the Service Provider can be used through the ApplicationServices property to build the Middleware if needed

Voilà! All is ready.

Conclusion

I hope by now there is no magic left in how dependency injection really works in ASP.NET 5; if you have questions or comments, feel free to leave them in the comments section.

Tips’n Tricks

  • In order to debug this whole process and step into the code, you need to do a few things:
    • Get the code by checking out the repositories from GitHub and make sure you are on one release tag (like beta4)
    • Create a web app in Visual Studio
    • Alter the “global.json” file so you add the paths to the repositories source to the “projects” key like this *
    • Now you have the code in your hands and can step through it
  • Code of interest:

ANZCoders Wrapup

Over the last week, the first ANZCoders virtual conference took place: the conference you can attend in your pyjamas! Fifteen sessions over five days by twelve speakers, all voted on by the audience themselves.

The conference was live, but it was also recorded on YouTube; every session has its own YouTube video available for watching any time. So I hear you say “Why attend live if the video is going to be available later?!”… here is why:

  1. The live Q&A: after each session the audience was given the chance to ask the speaker questions, just like at any physical conference; something that is not available to people watching the video later.
  2. The live discussion: as the speaker was running through the session, the chat channel was humming with all sorts of different opinions, supporting stories, links to resources, and lots of laughs that made the conference even more fun! Although this might sound a little distracting for both the audience and the speaker, IMHO the benefits outweighed the drawbacks.
  3. The people: connecting with such intelligent and passionate people was invaluable! Enough said.

The only drawback, I guess, was the reliability of the speakers’ internet connections; I, for example, lost at least 6 valuable minutes of my “IoC in ASPNET5” talk, even with my best arrangements for proper connectivity! (Yes, you need to skip past the minutes from 2:30 to 8:30.) But hey, a speaker can come down with the flu at a face-to-face conference as well ;).

Will I participate in a live virtual conference again? Absolutely!

So big thanks to Richard Banks for organizing the conference, and big thanks to the sponsors, the speakers, and the lovely audience who made this event a success!

 

Consultant Skills: Self-Confidence

Look into the mirror: increase your self-confidence

I have blogged before about some of the skills a consultant should be acquainted with, like Story Telling, Knowledge Depth & Breadth, and Having an Opinion, all of which I see as very important. But in this post I would not hesitate to say that self-confidence is the single most important of them all!

Before we continue, what does “self-confidence” mean? My words of choice would be: “it is the belief someone has about his/her capability of accomplishing something that he/she hasn’t tried before“. Notice here that it’s a “belief”.

Lacking Self-Confidence Is Bad

“How come you think it is the most important?” I hear you ask. Well, if you have been following my posts you will clearly see that I love bullet points, so let me list how lack of self-confidence is bad:

  • Lack of self-confidence is the shackles and chains that the individual willingly puts around his own neck, preventing himself from achieving even the simplest of goals, even if he has the potential and capabilities; he might be smart, thoughtful, knowledgeable, and resourceful, but he will not utilize any of these traits because he thinks he doesn’t have them. Nil, zero!
  • The main job of a consultant is to solve his clients’ problems; the client is clueless, confused, lost, in doubt, and he needs help, someone to rescue him from the trouble he is in. Imagine yourself as such a client: would you accept a consultant’s help if he wasn’t in a better state than you are? If the consultant himself is not sure of his capabilities and doubts his skills, would you still hand him your problem to solve? For employees it might be different; the employee might be considered an investment the company or management chooses to make, so he receives encouragement and support to get him going and to increase his self-confidence if he lacks it. Consultants, on the other hand, don’t have this luxury.
  • Even if you are not taking clients, the way people and peers see you will be affected. People will see you as you see yourself; if you see yourself as someone who can provide solutions and solve problems, you will be looked at as such a person; if you see yourself as weak, stupid, or a failure then, no surprise, you will be looked at as such a person.
  • Self-doubt, if not controlled, brings depression of varying degrees.
  • If you find these points aren’t bad enough, then re-read the first point above.

OK, it’s obvious how tremendously dangerous this is, but how do we solve it?

How to Increase Self-Confidence

Shall we list:

  • Acknowledge the problem; this is going to be the driving force of change: realization. You have to realise how big a problem this is, and realise the grave effect it has on you. This is a state we should have absolutely zero tolerance for.
  • Remember the “belief” part of the definition? You have to find a proof that supports your belief. Why do you think you can’t do it? Can you prove that you can’t? I would say help yourself out: whenever you are in doubt, try to remember all the success stories and accomplishments you have achieved in your life that can be compared to the situation you are in, and use them as proof that you can.
    What if you can’t find a proof of success? Well, at least you don’t have a proof of failure! So you can’t keep doubting yourself!
    What if you DO have a proof of failure? Then you should ask yourself: “Were the circumstances the same? Have I changed since then? Is this situation exactly like the one I failed in?” If your answer is no, and most likely it is, then that failure can’t be used as proof, and we are back to the fact that you don’t really know that you can’t accomplish it! If your answer is yes, though, then let’s examine the next point.
  • Ask yourself: “What can I do differently this time? How can I change the reason that caused my previous failure so it no longer applies?” This is actually a competitive advantage over confident people: you are URGED to think more, try harder, prepare better, and find a better way of doing things. Remember the tortoise and the hare?
  • Self-doubt and the fear of failure go hand in hand; we think we can’t accomplish, so we think that if we try we will fail, and failing is a big problem, right? Well, no! Whenever I am anxious about something, my wise wife asks me “what is the worst thing that can happen?” We tend to build feelings of fear, based on our implicit imagination, that are much greater than what they would really be if the failure actually happened. So thinking about the worst-case scenario invalidates these fears and gives them their proper size.
  • In most cases, the opportunity cost is much greater than the cost of failure. If self-doubt prevents you from trying for a leading position, then the loss of not getting that position is greater than the loss you would incur by trying and failing (in that attempt); at least you would learn how to do it better next time. It might be a simple sentence, but think about it for a minute, absorb it… what do you think?

Finally, know that this is a continuous struggle with yourself; it will never end! If you grow you will be exposed to new things, and if you are exposed to new things you will doubt yourself, period! Hopefully, though, with these tips and tricks you will be in control; you will use your self-doubt to your advantage and move forward successfully.

Now go and nail it!

[Image Credit: Kevin Cawley]