Preventing Open Redirect Attacks in MVC - Part 1

18 February 2015

At work today some of us went through a “secure code training” as part of our PCI compliance efforts. The training was led by a security consultant who previously did a secure code analysis on the login application my team has been building (the result of which: two minor issues from static analysis that, on investigation, were actually nothing). His presentation was actually on the OWASP top 10 since that and PCI have some overlap. I have some familiarity with that list since I’ve read some of Troy Hunt’s writings on the subject plus watched his Pluralsight course Hack Yourself First, which demonstrates a lot of these attacks live via Fiddler. So, sidenote: if you’re at all interested in the topic, definitely check out that course by Troy. Anyway, one of the OWASP top 10 is unvalidated redirects and forwards, better known as open redirect attacks, and since I spent quite a lot of effort coding against that for the login application, I thought the timing was good to explore that solution in more depth.

What is an unvalidated forward or redirect attack?

Suppose you have a web page that performs some function and takes, as a querystring parameter, a URL to return the user to once the processing is complete. Suppose that looks something like this:


http://www.mysite.com/dosomething?Source=http://go.backhere.com

So in the above, after our page completes its work, it will send the user back to http://go.backhere.com. Sound okay? Well suppose the link I visit is:


http://www.mysite.com/dosomething?Source=http://evil.site.com

Now after our work completes, we’ll send the user to evil.site.com. So the question now is: how on Earth would someone click such a link? Well, like several other OWASP issues, it comes down to phishing. Simply put, attackers will craft URLs in emails that look safe enough, but utilize this kind of open redirect to slip you over to a malicious site without you being aware of it. And once that happens, all bets are off. Read more about this issue directly from OWASP.
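
To make that concrete, here’s a minimal sketch (purely illustrative, not our actual code) of an MVC action that is vulnerable because it redirects to wherever the Source parameter points:

    public class DoSomethingController : Controller
    {
        public ActionResult Index(string source)
        {
            // ... do the page's actual work here ...

            // Vulnerable: blindly redirects to whatever URL the caller supplied.
            return Redirect(source);
        }
    }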

How do we prevent it?

You’ll notice a theme in the OWASP list, but the solution is: input validation. Do not blindly accept redirect URLs on your site; compare them against a whitelist you maintain of safe redirects. Any time the redirect would be to a URL not on the whitelist, simply don’t perform it. That may mean substituting a safe redirect or taking the user to an error page, but whatever you choose, don’t follow the untrusted URL. And that simple solution is the topic of this blog post, as I spent quite a bit of time fine-tuning such a solution.
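
As a rough sketch (the authority list here is made up), the naive fix to the earlier action looks something like the code below; the rest of this post is about building a more structured version of the same idea:

    public class DoSomethingController : Controller
    {
        private static readonly string[] SafeAuthorities =
            { "http://go.backhere.com", "https://www.mysite.com" };

        public ActionResult Index(string source)
        {
            // ... do the page's actual work here ...

            if (!string.IsNullOrEmpty(source)
                && Uri.IsWellFormedUriString(source, UriKind.Absolute)
                && SafeAuthorities.Contains(new Uri(source).GetLeftPart(UriPartial.Authority)))
            {
                return Redirect(source);
            }

            // Not on the whitelist: go somewhere known-safe instead of following the untrusted URL.
            return RedirectToAction("Index", "Home");
        }
    }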

The application I’m talking about provides enterprise login services against a cloud SaaS stack. The application supports the basic identity functions you’d expect plus, of course, login. But obviously a login application is not a terminal location: users pass through it on the way to their resources (once properly authenticated and authorized, of course). So the first thing to establish is: how do we know where to take users back to? In our case, it comes from two places. If the user attempts to access a resource that’s protected by a login agent, the agent redirects the user to the application and provides a querystring parameter (TARGET) that specifies the absolute URI the user was attempting to access and where they should be returned after authentication. Otherwise, the user can come directly to the login application to log in. Generally, that occurs when a user clicks the “Login” link on our enterprise site (powered by SharePoint), at which point SharePoint takes the user to the login application and provides the relative URI where they should be returned to in SharePoint. In point of fact, the URI is of the form /_layouts/authenticate.aspx?Source=/some/location, where /_layouts/authenticate.aspx is the SharePoint out-of-the-box authorization page and Source actually provides it with instructions on where to take the user once it’s done. Very complicated, I know, but such is life.

Anyway, so we have two different ways that we will be provided with the return URL for the user. All we need to do is validate that it’s safe, keep track of it, assist the user in logging in, and ultimately redirect the user to their return URL. Let’s take this in a few parts.

Getting the Return URL

We previously acknowledged that the first request into the login application will typically provide the return URL for the user. But at minimum, the user will be served a resource, input some values, and POST the form before the return URL is needed. And in our Create Account use-case, the return URL will come over with the user initially, but we will take them through at least two pages before they complete the process and need to be redirected. So the first thing we have to decide: where will we maintain the return URL? It comes in as a querystring, so of course it could be maintained there, but that’s a bit of a pain. Anytime the user submits data to the application, we have to POST it, then we have to reapply it as a querystring for any redirect we do within the application. That can be done, but it’s a lot of tracking. Instead, I’d prefer to just set it once at the beginning and then leave it alone (while having it be available for free when I actually want it). So the first decision: we will store it in a cookie. This is easy to set and retrieve, it’s more difficult for the user to casually modify, and there are a couple of MVC constructs that can assist us in our task.

So we know we’re putting it in a cookie, how do we get it in there? Let’s start with this interface:


    public interface IUrlProvider
    {
        string GetUrl(HttpContextBase context);
    }

So given an instance of HttpContextBase, we will return a URL from it. Okay, simple enough. Our requirements now lead us to a couple implementations. First, an implementation over top of the TARGET querystring parameter:


	public class SiteMinderTargetQueryStringParameterProvider : IUrlProvider
    {
        private readonly string targetParameterName;
        private readonly IEnumerable<TargetUrlPrefix> parameterPrefixes;
        
        public SiteMinderTargetQueryStringParameterProvider()
            : this("TARGET", new string[] { "-SM-", "$SM$" })
        {
        }

        public SiteMinderTargetQueryStringParameterProvider(string targetParameterName, string[] parameterPrefixes)
        {
            if (string.IsNullOrEmpty(targetParameterName))
                throw new ArgumentNullException("targetParameterName");

            if (parameterPrefixes == null)
                throw new ArgumentNullException("parameterPrefixes");

            this.targetParameterName = targetParameterName;
            this.parameterPrefixes = parameterPrefixes.Select(a => new TargetUrlPrefix(a));
        }


        public string GetUrl(HttpContextBase context)
        {
            var possibleTarget = context.Request.QueryString[this.targetParameterName];

            if (string.IsNullOrEmpty(possibleTarget))
                return null;

            var urls = this.parameterPrefixes.Select(a => a.AsAbsoluteUri(possibleTarget));

            return urls.DefaultIfEmpty(null).FirstOrDefault(a => a != null);
        }

        public class TargetUrlPrefix
        {
            private readonly string prefix;
            private readonly char smSeparator;

            public TargetUrlPrefix(string prefix)
            {
                this.prefix = prefix;
                this.smSeparator = prefix[0];
            }

            public bool StartsWithPrefix(string possibleTarget)
            {
                return possibleTarget.StartsWith(this.prefix, StringComparison.OrdinalIgnoreCase);
            }

            public bool IsWellFormedAbsoluteUri(string possibleTarget)
            {
                if (!StartsWithPrefix(possibleTarget))
                    return false;

                return Uri.IsWellFormedUriString(Uri.UnescapeDataString(possibleTarget.Replace(this.prefix, string.Empty)), UriKind.Absolute);
            }

            public string AsAbsoluteUri(string possibleTarget)
            {
                if (!IsWellFormedAbsoluteUri(possibleTarget))
                    return null;

                var absoluteUri = Decode(possibleTarget);

                return new Uri(absoluteUri, UriKind.Absolute).AbsoluteUri;
            }
          
            // SiteMinder encodes the TARGET value: a 4-character prefix ("-SM-" or "$SM$") followed by
            // the URL, in which the prefix's first character acts as an escape character and other
            // characters may be percent-encoded. This undoes that encoding.
            private string Decode(string smEncodedTarget)
            {
                var sb = new StringBuilder();

                // Strip the 4-character SiteMinder prefix.
                var strippedPrefixUrl = smEncodedTarget.Substring(4, smEncodedTarget.Length - 4);

                for (int i = 0; i < strippedPrefixUrl.Length; i++)
                {
                    if (strippedPrefixUrl[i] == smSeparator)
                    {
                        // Escape character: emit the character that follows it as-is.
                        sb.Append(strippedPrefixUrl[i + 1]);
                        i++;
                    }
                    else if (strippedPrefixUrl[i] == '%')
                    {
                        // Standard percent-encoding: unescape the three-character sequence (e.g. %2f -> /).
                        sb.Append(Uri.UnescapeDataString(strippedPrefixUrl.Substring(i, 3)));
                        i = i + 2;
                    }
                    else
                    {
                        sb.Append(strippedPrefixUrl[i]);
                    }
                }

                return sb.ToString();
            }
        }
    }

Some of the crazy logic you see in there is because the login agent (CA SiteMinder) will encode the URLs in some rather unusual ways, so our provider has to account for that and decode it (and this goes beyond typical URL encoding). But otherwise, you’ll see the code just looks for a querystring parameter called TARGET, assumes it’s in the form “-SM-[URI]” or “$SM$[URI]”, extracts the URI, and returns it from the provider. If there is no such parameter or its value is not in the expected format, it returns null. It’s true that returning null is a rather unsafe approach, but here we’re somewhat mirroring the concept of how MVC value providers operate.
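
For illustration (the URL below is invented), here’s roughly what the nested TargetUrlPrefix class does with a SiteMinder-encoded value:

    var prefix = new SiteMinderTargetQueryStringParameterProvider.TargetUrlPrefix("-SM-");
    var rawTarget = "-SM-http%3a%2f%2fwww.mysite.com%2fmembers%2fhome";

    var isWellFormed = prefix.IsWellFormedAbsoluteUri(rawTarget); // true
    var decoded = prefix.AsAbsoluteUri(rawTarget);                // "http://www.mysite.com/members/home"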

URLs provided by SharePoint will be in the form ?ReturnUrl=/_layouts/authenticate.aspx?Source=/some/page. The key is the ReturnUrl parameter, not necessarily its value (although that is pretty static to be honest). So we need a provider that will look for a ReturnUrl querystring parameter and either return what it finds or null if nothing is found:


	public class ReturnUrlQueryStringParameterUrlProvider : IUrlProvider
    {
        private readonly string returnUrlParameterName;

        public ReturnUrlQueryStringParameterUrlProvider()
            : this("ReturnUrl")
        {
        }

        public ReturnUrlQueryStringParameterUrlProvider(string returnUrlParameterName)
        {
            if (string.IsNullOrEmpty(returnUrlParameterName))
                throw new ArgumentNullException("returnUrlParameterName");

            this.returnUrlParameterName = returnUrlParameterName;
        }

        public string GetUrl(HttpContextBase context)
        {
            var returnUrl = context.Request.QueryString[this.returnUrlParameterName];

            if (string.IsNullOrEmpty(returnUrl))
                return null;

            if (!Uri.IsWellFormedUriString(returnUrl, UriKind.RelativeOrAbsolute))
                return null;

            return returnUrl;
        }
    }
	

You can see this provider is pretty straightforward: it just looks for the parameter, ensures it’s a valid URI string, and returns it. So we’re all good here, but there are a couple of cases we haven’t covered. What if no return URL or target parameter is specified at all? And what if none is specified, but the cookie already has a value (such as after a POST has occurred)? Without further suspense, we end up with a couple more providers to round things out:


	public class ReturnUrlCookieUrlProvider : IUrlProvider
    {
        private readonly string returnUrlCookieName;

        public ReturnUrlCookieUrlProvider(string returnUrlCookieName)
        {
            if (string.IsNullOrEmpty(returnUrlCookieName))
                throw new ArgumentNullException("returnUrlCookieName");
            
            this.returnUrlCookieName = returnUrlCookieName;
        }

        public string GetUrl(HttpContextBase context)
        {
            var returnUrlKey = context.Request.Cookies.AllKeys.FirstOrDefault(a => String.Equals(this.returnUrlCookieName, a, StringComparison.OrdinalIgnoreCase));

            return (returnUrlKey == null) ? null : context.Request.Cookies[returnUrlKey].Value;
        }
    }
	
	public class CompositeUrlProvider : IUrlProvider
    {
        private readonly IList<IUrlProvider> providers;

        public CompositeUrlProvider(params IUrlProvider[] providers)
        {
            if (providers == null || !providers.Any())
                throw new ArgumentNullException("providers");

            this.providers = providers.ToList();
        }

        public string GetUrl(HttpContextBase context)
        {
            var urls = providers.Select(a => a.GetUrl(context));

            if (urls.All(a => string.IsNullOrEmpty(a)))
                return null;
            else
                return urls.First(a => !string.IsNullOrEmpty(a));
        }
    }
	

So now we have providers that will retrieve URLs from our two querystring parameters and our cookie, and we can roll them together into a single composite provider. A few things are noticeably absent here: we do not have a provider for a “default” return URL, nor have we accounted for any validation. Both are things we’ll need, and you could even argue that the first (a default URL provider) would fit in nicely here, but we went a slightly different direction as you’ll see.
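
For reference, wiring the providers together might look something like this (the cookie name here is just a placeholder; use whatever your application settles on):

    // Order matters: the querystring providers win, and we fall back to the cookie.
    IUrlProvider returnUrlProvider = new CompositeUrlProvider(
        new SiteMinderTargetQueryStringParameterProvider(),
        new ReturnUrlQueryStringParameterUrlProvider(),
        new ReturnUrlCookieUrlProvider("ReturnUrl"));

    // Later, for the current request:
    // var returnUrl = returnUrlProvider.GetUrl(new HttpContextWrapper(HttpContext.Current));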

Validation Time

Our providers give us easy access to the return URL for the user based on the active request, but simply put, the user (or rather their input) cannot be trusted, so we need to validate it to decide whether we will use it or not. We first need to make a critical decision: how will we deal with URLs that could be either relative or absolute? The SiteMinder agent will always provide absolute URIs, but SharePoint will provide relative ones. Since we happen to know that the login application will not be deployed on the SharePoint servers, relative URLs are simply not an option (at least by themselves), so we need to ensure that all return URLs are in absolute form in addition to being safe to use. Let’s define this interface:


	public interface ISafeUrlRule
    {
        bool CanApply(string url);

        Uri GetSafeAbsoluteUrl(string url);
    }

An ISafeUrlRule provides a way to get a safe absolute URL from the string representation of a URL. The rule also provides a way to assess whether it applies at all, should that come in handy. Here’s the most basic rule:


	public class SafeAbsoluteUrlRule : ISafeUrlRule
    {
        private readonly IEnumerable<string> safeAuthorities;

        public SafeAbsoluteUrlRule(IEnumerable<string> safeAuthorities)
        {
            this.safeAuthorities = safeAuthorities;
        }

        public bool CanApply(string url)
        {
            if (!Uri.IsWellFormedUriString(url, UriKind.Absolute))
                return false;

            var uri = new Uri(url, UriKind.Absolute);

            return safeAuthorities.Contains(uri.GetLeftPart(UriPartial.Authority));
        }

        public Uri GetSafeAbsoluteUrl(string url)
        {
            if (!CanApply(url))
                return null;

            return new Uri(url, UriKind.Absolute);
        }

        public IEnumerable<string> SafeAuthorities { get { return this.safeAuthorities; } }
    }

The SafeAbsoluteUrlRule maintains a whitelist of safe authorities (http://www.mysite1.com, http://www.mysite2.com, https://www.mysite1.com) and simply compares the authority (including scheme) of the url parameter against that list, after first confirming the url is a well-formed absolute URI; if the url is at a safe authority, the rule returns it as a Uri. Otherwise, it returns null. That’s it, pretty simple, and this will work fine for our SiteMinder-provided URLs. Now let’s get a little more exotic: SharePoint return URLs. We said already that SharePoint will give us relative URLs, but since we have multiple SharePoint farms (the enterprise farm plus some sub-farms for partner organizations we work with), it’s not as simple as using a static base authority. Really what we want to do is figure out what authority the user came from and send them back there. Enter UseHttpRefererAsDefaultAuthoritySafeRelativeUrlRule.


	public class UseHttpRefererAsDefaultAuthoritySafeRelativeUrlRule : ISafeUrlRule
    {
        private readonly Func<HttpContextBase> httpContext;
        private readonly string defaultAuthority;
       
        public UseHttpRefererAsDefaultAuthoritySafeRelativeUrlRule(string defaultAuthority)
            : this(defaultAuthority, () => new HttpContextWrapper(HttpContext.Current))
        {
        }
      
        public UseHttpRefererAsDefaultAuthoritySafeRelativeUrlRule(string defaultAuthority, Func<HttpContextBase> httpContext)
        {
            this.httpContext = httpContext;
            this.defaultAuthority = defaultAuthority;
        }

        private string GetRefererOrDefault()
        {
            var referer = this.httpContext().Request.Headers["Referer"];
            
            return referer != null ? referer : this.defaultAuthority;
        }

        protected virtual ISafeUrlRule CreateRelativeRule()
        {
            var authority = new Uri(GetRefererOrDefault(), UriKind.Absolute).GetLeftPart(UriPartial.Authority);

            return new SafeRelativeUrlRule(authority);
        }
      
        public bool CanApply(string url)
        {
            return CreateRelativeRule().CanApply(url);
        }

        public Uri GetSafeAbsoluteUrl(string url)
        {
            return CreateRelativeRule().GetSafeAbsoluteUrl(url);
        }
    }
	
	 public class SafeRelativeUrlRule : ISafeUrlRule
    {
        private readonly Uri safeUrlAuthority;
       
        public SafeRelativeUrlRule(string safeUrlAuthority)
        {
            this.safeUrlAuthority = new Uri(safeUrlAuthority, UriKind.Absolute);
        }

        public bool CanApply(string url)
        {
            return (!string.IsNullOrEmpty(url) && Uri.IsWellFormedUriString(url, UriKind.Relative));
        }

        public Uri GetSafeAbsoluteUrl(string url)
        {
            if (!CanApply(url))
                return null;

            return new Uri(this.safeUrlAuthority, url);
        }

        public Uri SafeAuthority { get { return this.safeUrlAuthority; } }
    }

No real surprises here. First, the rule is provided with a way to access the current HttpContext. We provide it as a lambda since Windsor will generate the component for us and the HTTP context is not always in its fully formed state when that occurs. Plus we may like to register the rule as a singleton. The rule relies on the value of the “Referer” HTTP header to provide the authority; in the version shown here it simply falls back to the default authority (basically the SP enterprise farm) when no referer is present, while our full rule also checks the referer’s authority against a whitelist and uses the default when it isn’t safe. The rule then delegates to the SafeRelativeUrlRule to actually complete the validation and produce the URI.
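
As a rough illustration (the hostnames are placeholders), once the rule has pulled an authority from the referer, the work reduces to what SafeRelativeUrlRule does:

    var relativeRule = new SafeRelativeUrlRule("https://sharepoint.mysite.com");

    var canApply = relativeRule.CanApply("/_layouts/authenticate.aspx?Source=/some/page");    // true
    var absolute = relativeRule.GetSafeAbsoluteUrl("/_layouts/authenticate.aspx?Source=/some/page");
    // absolute -> https://sharepoint.mysite.com/_layouts/authenticate.aspx?Source=/some/page

    var rejected = relativeRule.CanApply("http://evil.site.com/phish");                       // false: not a relative URL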

Now that we’ve got the rules (and we actually have some special one-off rules I’m going to omit), let’s raise things up a level to a policy:


	public interface ISafeUrlPolicy
    {
        Uri GetSafeAbsoluteUrl(string url);

        bool IsSafe(string url);

        Uri DefaultUrl { get; }
    }
	
	public class SafeUrlPolicy : ISafeUrlPolicy
    {
        private readonly IList<ISafeUrlRule> rules;
        private readonly DefaultUrlRule defaultUrlRule;

        public SafeUrlPolicy(string defaultSafeUrl, params ISafeUrlRule[] rules)
        {
            this.rules = rules.ToList();
            this.defaultUrlRule = new DefaultUrlRule(defaultSafeUrl);
        }

        public bool IsSafe(string url)
        {
            return this.rules.Any(a => a.CanApply(url));
        }
      
        public Uri GetSafeAbsoluteUrl(string url)
        {
            var satisfiedRule = rules.FirstOrDefault(a => a.CanApply(url));

            return ChooseRule(satisfiedRule).GetSafeAbsoluteUrl(url);
        }

        private ISafeUrlRule ChooseRule(ISafeUrlRule selectedRule)
        {
            return selectedRule ?? this.DefaultRule;
        }

        public Uri DefaultUrl { get { return this.defaultUrlRule.DefaultUrl; } }

        public IList<ISafeUrlRule> Rules { get { return this.rules; } }

        public ISafeUrlRule DefaultRule { get { return this.defaultUrlRule; } }
    }

The policy is simply a collection of rules plus a special default rule to apply if no other rules work. There are other ways we arguably should have done this (such as just making it a composite with a default fall-through rule), but we liked how explicit this was. The policy is what’s provided to other components in the application that need to evaluate if a URL is safe per our official policy.
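
Putting it together, composing a policy might look roughly like this (the authorities and default URL are placeholders, and DefaultUrlRule is the default-fallback rule that isn’t shown in this post):

    ISafeUrlPolicy safeUrlPolicy = new SafeUrlPolicy(
        "https://www.mysite.com/members/home",   // where to send users when nothing else applies
        new SafeAbsoluteUrlRule(new[] { "https://www.mysite.com", "https://partners.mysite.com" }),
        new UseHttpRefererAsDefaultAuthoritySafeRelativeUrlRule("https://www.mysite.com"));

    // Inside a web request:
    safeUrlPolicy.IsSafe("https://www.mysite.com/account");    // true: matches a safe authority
    safeUrlPolicy.IsSafe("http://evil.site.com");              // false: no rule applies
    safeUrlPolicy.GetSafeAbsoluteUrl("http://evil.site.com");  // falls through to the DefaultUrlRule instead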

Since this post is already too long, we’ll call this Part 1 and stop here. But come back for Part 2, as we still have some key items to cover: how do we link the URL providers and the safe URL rules together and get them to persist the return URL through a cookie? How will we manage the cookie (when to update it, when not to, etc.)? And ultimately, how will we make the return URL available in a frictionless fashion in our application? All these and more will be explored next time.


How Software Development is like Veterinary Medicine

15 February 2015

My dad is a veterinarian at Three Chopt Animal Clinic in the Richmond area and has been for over 35 years. When I was growing up, I observed quite a lot of veterinary procedures, far more than the average person would. But I never seriously considered going into the field, with the most common reason (usually given about me) being my general squeamishness around blood. I happen to love quotes, and like most fields, the medical field is rife with them. One of my dogs was unfortunately diagnosed this week with an incurable liver tumor, so his time with us will be short. When I was talking to my dad about the situation, he used a surgical axiom, and it struck me how applicable it was to software development, particularly debugging and bug triage, which is consuming me as we try to wrap up the new identity platform integration. In that spirit, this post will look at some common medical idioms and how they also apply to the seemingly unrelated field of software development.

A chance to cut is a chance to cure

For surgeons, this axiom is exactly what it sounds like: giving the surgeon a chance to perform the procedure is giving the patient a chance at a cure. And similarly, given a problem, most software developers leap to a code solution. When time is tight and shipping dates are on the line, I think most of us believe that relaxing code freeze and giving us a chance to do what we do best will save the day. And sometimes that’s true. In my current project, at least half a dozen times and counting, I’ve had to pull out what seemed like a miracle to work around the vendor’s design or a bug in the platform, or just do something to help move us along. And most of the time, I asked to be given the chance to write some code in contravention of code freeze or similar process guidelines. And that’s worked out for us. But sometimes the cure does not require surgery or more code. And sometimes the situation cannot be salvaged. And I would be remiss if I didn’t add the flip-side axiom: a chance to cut is a chance to kill. As developers, there is always the chance that we will introduce a bug, so we have to weigh the risk of introducing one against the benefit of authoring code when a situation is outside the bounds of normal processes.

All bleeding eventually stops

I’m famous at work for saying this one in reference to struggling projects; I heard it from my dad, who learned it from his boss when he started practicing veterinary medicine. This phrase is a bit of gallows humor: if a patient is bleeding, either the veterinarian will control the bleeding and save the patient, or the patient will bleed out (at which point the bleeding stops). It always reminds me that a tough situation will always come to an end. Sometimes that means we’ll salvage the situation; sometimes it will end with an unfortunate result. But all we can do is focus on trying to save it while accepting that things might ultimately break against us.

If it's worth taking out, it's worth turning in

In medicine this refers to the practice of submitting any mass removed from a patient to pathology for follow-up analysis related to cause and effect. In this day and age, when unit testing is an almost universally accepted practice, I liken this saying to writing unit tests in response to bug reports. The most common question I get from developers at work is either “How can I start writing unit tests?” or “I’m in a code-base, where should I start writing unit tests?” I usually ask, “Are there bugs you’re fixing?” Bugs, to me, are the easiest way to start writing unit tests. If you get a bug report, write a test that proves the bug exists. Then all you’ve got to do is write the code to make the test pass. That’s it. I’m training a guy on my team as a .NET developer. I got asked to look into a situation where data on-screen in an internal LOB application was being duplicated (but in the database it was fine). This code was a few years old, so it took quite a bit of effort to write a test to prove the bug in the code (a number of hours, in fact). But once I did that, I told my colleague, “All you’ve got to do now is make the test pass.” I could have solved the problem in 5 minutes once I had the failing test, but I knew it would be a big win for him to make an actual production change in the code-base. I typically look askance at someone who tells me they fixed a bug but didn’t check in any unit tests: how do they know it’s solved? How do they know they didn’t introduce a new one? So, just as submitting excised masses to pathology is routine practice, developers should do likewise for bugs.
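
As a trivial, made-up illustration of that workflow (the names are invented and any test framework will do; NUnit-style shown here), the first artifact of a bug fix is a test that fails against the current code:

    [Test]
    public void GetLineItems_DoesNotDuplicateRows_WhenOrderHasMultipleShipments()
    {
        // Arrange: build an order the way the bug report describes it.
        var order = TestOrders.WithTwoShipments();

        // Act: run the same code path the screen uses.
        var lineItems = new OrderScreenViewModel(order).GetLineItems();

        // Assert: fails today because the rows come back duplicated; passes once the bug is fixed.
        Assert.AreEqual(order.LineItems.Count, lineItems.Count);
    }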

Age is not a disease

In medicine, the fact that a patient is of advanced age does not automatically disqualify certain treatments. Yes, age can be and is a factor, but it’s not the only one. Similarly, just because a code-base has been around a long time does not mean it needs to be replaced with a newer one in a more contemporary technology. At one company I worked for, I supported all the systems for document collection. Most of these were Windows services that read and loaded documents received on feeds we purchased, such as EDGAR documents from the SEC. One of our most important ones was called PR Loader, which loaded press releases from a feed supplied by Acquire Media. Press releases were of the utmost importance to us for two reasons. One: press releases, by SEC regulation, are where companies break news, and it was worth big money to us (and our clients) to have access to them as fast as possible. And two: we supplied investor relations solutions (i.e. a company’s IR website) to some clients, and it was a matter of regulation (and subject to fines) if press releases didn’t reach their site by a specific time; these same PRs came to us on the Acquire Media feed (don’t ask why they weren’t uploaded directly). And the PR Loader was written in a very old language that they told me was called X++ (which I’d never heard of). The loader had been written 10+ years earlier and hadn’t been changed in about 3 years when my team was doing a project that called for modifying it. We batted around the idea of rewriting it in C#, but after analysis, the changes we needed could be added as a separate module without touching the core logic. And since management did not feel comfortable taking on the risk of rewriting it (or the time it would take, plus the testing), it was faster to get a developer up to speed on the unfamiliar language and have the one remaining developer (who at the time was a senior IT director) do a thorough code review. There is a lot of value inherent in legacy code-bases. The cost of producing them originally has been amortized over a long period and represents tremendous value to the enterprise; that should not be casually discarded just because of its age.

Never be first to use a new treatment, or last

Most developers do not work at companies whose goal is to be on the bleeding edge of technology. The cost of being on the bleeding edge is that you will do a lot of bleeding, and for most enterprises, that is simply not necessary. If there is little documentation or knowledge out on the web about a topic, that should really give you pause as to whether adopting the technology as part of a project is a sound business decision. Perhaps it is, but often the lessons learned from it will be painful and expensive. On the other hand, being too late to pick up a new technology puts you far behind the competition. If my company were still producing ASMX web services, that should really give us pause given the plethora of easier-to-use, widely accepted options out there. The real moral of this lesson: consider the full ramifications of adopting a technology, and make sure you factor in the support and knowledge available out in the world about it.

One CT scan is worth a thousand neurologists

Speculation is free and easy. And in many ways, fun. But seldom does it gain us what we really need to solve a difficult situation: data. In the course of implementing the new cloud SaaS identity platform, its performance has at times not been what we expected or could accept. And the vendor has given us a lot of stories about why it is the way it is, or that it will be faster in production, or even that it’s faster than we think. And there were a lot of meetings to discuss everyone’s opinions on the matter. But I did something simple: I got us data. I added event logging on top of the client proxies that simply times every call to the vendor’s services and writes the elapsed time (plus the method name) into the event log. From there it was easy to download the event logs and write an application to parse out the times and calculate standard descriptive statistics on them. And they were eye-opening. So much so that our IT management passed them to the vendor’s management with a simple question: how do you explain these numbers? And they simply couldn’t. Thus began weeks of load testing and performance optimization on their part, with the data my login application could gather showing whether things were getting better or not. So when there’s argument and speculation, collect some data and at least argue about something rigorous.
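
A rough sketch of the idea (the names here are illustrative, not the actual proxy code) might look like:

    public T TimeCall<T>(string operationName, Func<T> call)
    {
        var stopwatch = System.Diagnostics.Stopwatch.StartNew();
        try
        {
            return call();
        }
        finally
        {
            stopwatch.Stop();
            // Record the operation name and elapsed milliseconds for later statistical analysis.
            System.Diagnostics.EventLog.WriteEntry(
                "IdentityProxyTiming",
                string.Format("{0}: {1} ms", operationName, stopwatch.ElapsedMilliseconds),
                System.Diagnostics.EventLogEntryType.Information);
        }
    }

    // Usage: var account = TimeCall("GetAccount", () => identityProxy.GetAccount(accountId));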

There are plenty more medical sayings I could include here and maybe I’ll take another run at this one day, but hopefully you’ve enjoyed this only-slightly-joking comparison between the fields of medicine and software development.