Explaining Douglas Crockford’s Updating the Web

This is my attempt to extract his slides from his presentation and annotate them slightly. I write my interpretations under the headings. Watch the full talk here.

What’s wrong with the web?



He believes that the web's security vulnerabilities are due to its overall complexity.


Key / Value Pairs


Request / Response

Certificate Authorities

Not trustworthy, and vulnerable


Not for applications…really for describing technical documents


XSS attacks

Document Object Model

The worst API, very insecure


Awkward and not intended for application usage


A hot mess; it is pretty terrible, but there are some good parts

Many have tried

  • Microsoft, Adobe, Apple, Oracle, many more
  • In most cases, the technology was much better
  • In most cases, the solution was not open
  • There was no transition

Upgrade the Web

Keep the things it does well.

Go down a new path for the things that are still vulnerable.

The HDTV transition was made possible with the set-top box

Helper App

Used to open external protocols that were not supported by the browser. For a new protocol, “web”, we will have a new way to execute applications.

Transition Plan

  • Convince one progressive browser maker to integrate.
  • Convince one secure site to require its customers to use that browser.
  • Risk mitigation will compel the other secure sites.
  • Competitive measures will move the other browser makers.
  • The world will follow for improved security and faster application development.
  • Nothing breaks!

Strong Cryptography

  • ECC 521
  • AES 256
  • SHA-3 256
Built upon paranoid levels of cryptography, beyond what is deemed necessary by today's standards, keeping things secure and future-proof.

ECC 521 public keys as unique identifiers

No more passwords, no more usernames. This is you.
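A rough sketch of what "the public key is you" could look like in practice. The talk does not specify an encoding, so deriving a short SHA-3 256 fingerprint from the DER-encoded P-521 public key is my own assumption (Node's built-in crypto module supports both primitives):

```typescript
// Sketch: a P-521 ("ECC 521") key pair whose public key serves as the
// user's identity. Hashing the DER-encoded key into a fingerprint for
// display purposes is an assumption, not something from the talk.
import { generateKeyPairSync, createHash } from "crypto";

function makeIdentity(): { publicKeyDer: Buffer; fingerprint: string } {
  // secp521r1 is the curve behind the "ECC 521" slide bullet
  const { publicKey } = generateKeyPairSync("ec", { namedCurve: "secp521r1" });
  const publicKeyDer = publicKey.export({ type: "spki", format: "der" }) as Buffer;
  // SHA-3 256 fingerprint stands in for the full key where brevity matters
  const fingerprint = createHash("sha3-256").update(publicKeyDer).digest("hex");
  return { publicKeyDer, fingerprint };
}

const id = makeIdentity();
console.log(id.fingerprint);
```

Because every freshly generated key pair is unique, two users can never collide on the same identity, which is what lets the key replace usernames entirely.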

Secure JSON over TCP

HTTP is limited and not really needed for this. JSON can be encrypted and pushed over the wire asynchronously.

web://  publickey @ ipaddress / capability

It’s not pretty, but it’s clear. Take the certificate authorities out, and keep…
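To make the address form concrete, here is a minimal parser for it. The exact grammar was not specified in the talk, so splitting on `@` and the first `/` is my own guess:

```typescript
// Sketch of parsing the proposed "web://publickey@ipaddress/capability"
// address form. The field shapes are assumptions, not a published spec.
interface WebAddress {
  publicKey: string;   // the user's ECC 521 public key (hex, assumed)
  ipAddress: string;   // where the service lives
  capability: string;  // what the holder is allowed to do
}

function parseWebAddress(url: string): WebAddress {
  const m = /^web:\/\/([^@]+)@([^/]+)\/(.+)$/.exec(url);
  if (m === null) throw new Error(`not a web:// address: ${url}`);
  return { publicKey: m[1], ipAddress: m[2], capability: m[3] };
}

const addr = parseWebAddress("web://ab12cd@192.0.2.7/orders");
console.log(addr.capability); // "orders"
```

Note there is no hostname and no certificate authority anywhere in the address; the public key itself is the thing you trust.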

Trust Management / Petnames

The way to make the long and completely foreign scheme identifiable to the user. The initial relationship, like going to “amazon.com”, would come from search engines or directories. The idea is that once you know about the site, there is a “relationship” that you wish to maintain. Just typing in a domain name like “money.com”, which was a hit-or-miss sort of thing anyway, would no longer really be possible.
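A petname table is conceptually tiny; the sketch below shows the idea, with the names and addresses invented for illustration:

```typescript
// Sketch of a petname table: the browser maps a human-friendly name the
// user chose to the unfriendly web:// identifier. All values are made up.
class PetnameTable {
  private names = new Map<string, string>();

  bookmark(petname: string, address: string): void {
    this.names.set(petname, address);
  }

  // Resolve a petname; undefined means no relationship exists yet, so
  // the user would fall back to a search engine or directory.
  resolve(petname: string): string | undefined {
    return this.names.get(petname);
  }
}

const table = new PetnameTable();
table.bookmark("my bank", "web://9f3a1c@203.0.113.5/banking");
console.log(table.resolve("my bank"));
```

The security property is that petnames are local: a phisher cannot spoof “my bank” because that name only exists inside your own table, bound to the key you originally bookmarked.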


More than a sandbox…only has access to what is granted.

Cooperation under mutual suspicion

I think here he was saying to assume that everything is potentially malicious. All applications can work together based on the APIs that they allow.

JavaScript Message Server / Qt

Isolated components, sending only JSON back and forth. He wants to use something like NodeJS, but with better security in place to handle the messaging. This would talk to the remote server and perhaps the individual applications. It is somewhat like an AMQP bus that is secure. Qt is the interaction and rendering framework, which is very widely used. I'm inclined to say that at this point in time, if you want to approach things this way, let's not limit the potential. Why not allow Qt at a minimum, or a JVM that can plug in? Essentially, this creates a platform for building web-based applications.
He didn’t go into too much detail here at all. The only thing he really mentioned was the clean separation this provides. Qt would handle the rendering of content as well as user interaction. The messaging bus would transfer the content.
I’m thinking that taking his Vat approach, we can almost describe a secure web-based ecosystem. He didn’t discuss how this would work in any way, but I can propose a possible design.
I’m thinking that “applications” have dependencies but they aren’t included in your code at all. You can refer to the original repository of that dependency or bundle it if you want to. The idea is that the applications will in fact “live” in the web but are installed into the user’s browser much more like an extension on Chrome is. They will have versions. When you have updates to be pushed out part of the application checks a specific address for updates and performs that update accordingly.
I’m inclined to say that, much like an operating system, this platform can have standalone applications as well as applications that allow other applications to interact with them. Let’s say you have an Ebay application. You want to buy things. You have the Ebay application installed. Let’s say that you also have your bank application, say Wells Fargo, installed. There is a PaymentProvider interface that can be implemented, defining what is necessary to provide payments to other applications. When you want to pay for the item you won on Ebay, you can choose any of your applications that implement the PaymentProvider interface.
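The PaymentProvider name comes from the paragraph above, but its shape is entirely my own invention; here is one way it might look, with TypeScript's structural typing letting any installed app satisfy it:

```typescript
// Sketch of the PaymentProvider idea: any installed application that
// implements the interface can be offered at checkout. The method
// signatures and names are hypothetical.
interface PaymentProvider {
  readonly displayName: string;
  pay(amountCents: number, payee: string): string; // returns a receipt id
}

class BankApp implements PaymentProvider {
  readonly displayName = "Wells Fargo";
  pay(amountCents: number, payee: string): string {
    return `bank-receipt:${payee}:${amountCents}`;
  }
}

// The auction app does not know which providers exist; it just asks the
// platform for everything implementing PaymentProvider.
function checkout(providers: PaymentProvider[], amountCents: number): string {
  if (providers.length === 0) throw new Error("no payment providers installed");
  const chosen = providers[0]; // in reality, the user would pick one
  return chosen.pay(amountCents, "ebay-seller-42");
}

console.log(checkout([new BankApp()], 2500));
```

The point of the interface is decoupling: Ebay never sees your bank credentials, only the capability you granted through the contract.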
I’m not entirely certain, but I believe the intention was that the JS message server may be more than just a message bus; rather, the front-end application would be written there. The business logic would live on the messaging end, and it would broadcast messages to the Qt side for rendering updates, etc. Assuming I am correct, I think we can play around with this more and refine it.
I really like the application ecosystem concept. Pulling it off, coming up with logical interfaces and ways of interacting, will not be easy. It is, however, a tremendous improvement over the chaos that we have today. We have an interconnected system that talks together in many different ways but in fact has very little security and few clear-cut boundaries in place. OAuth2 does have the notion of granting certain roles to different user types. This is a start, but to allow applications to collaborate, a more descriptive mechanism is needed. This concept is going to be essential in the development of IoT technologies.

The Old Web: Promiscuity

The New Web: Commitment

The mantra for this. The old web will remain, and I'm pretty sure that isn't just for the transitional period. Rather, for “finding” new content, what we call “browsing the web”, we would use the old web. Once we establish a relationship with a site, we will use the new web to maintain that relationship. Perhaps you can think of it the way HTTP is used for insecure content while some sites “switch” to HTTPS for secure content; so too here.

There’s nothing new here

No new technology at all. Just bringing current technologies together.


TypeScript: Much more than having “closure”

JavaScript (or ECMAScript) has a community that is always finding ways to make JavaScript “suck less”: easier to write, deploy, or test. What I have never seen is an attempt to make JavaScript more like “Java”. No language is the magical unicorn perfect for every single situation; only a novice says stupid things like that. Yes, JavaScript is messy, odd, and confusing. Its API is funky and sadly inconsistent from browser to browser. As the content we develop shifts more and more toward full-blown applications and not mere static content, more conventions are needed to ensure that JavaScript protects itself.

TypeScript was pioneered by Microsoft (yup) back in 2012, and it has been quite well received by the community. The AngularJS team actually abandoned their development of a similar technology in favor of TypeScript.

Type annotations, for those who come from Smalltalk and Python, are going to look annoying and cluttered…but that simply isn't the priority. Readable code is nice and especially useful for quickly understanding a program…but at the same time it is equally important, if not more so, that developers actually follow the guidelines they have put in place and not introduce errors that may be hard or even impossible to detect.

Java is verbose; there is no denying it. Java 7 made it possible to omit inferred generic types from a variable's initialization, and that does clean things up a bit. My personal feeling is that having greater control trumps speed every time in enterprise applications. For small start-ups that are understaffed and overworked, time is of the essence, but that is a different story altogether.

Gradual typing with smart inferred type checking is a very nice balance that should appease those with statically typed backgrounds and beyond. TypeScript's approach is quite refreshing. The addition of the interface adds another dimension to TypeScript. Interfaces in Java always yield a class implementation; in TypeScript you don't actually need to make a “class” to take advantage of interfaces. They are enormously useful for specifying function parameters.
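A small illustration of that last point. In TypeScript, an interface can type a function parameter, and any object literal with the right shape satisfies it, with no class declaration anywhere (the `Point` example is my own, not from TypeScript's docs):

```typescript
// An interface used purely to type a function parameter.
interface Point {
  x: number;
  y: number;
  label?: string; // optional member: gradual typing at work
}

function distanceFromOrigin(p: Point): number {
  return Math.sqrt(p.x * p.x + p.y * p.y);
}

// A plain object literal works; there is no "implements Point" anywhere.
console.log(distanceFromOrigin({ x: 3, y: 4 })); // 5
```

This is structural typing: the compiler checks the shape of the value, not its declared lineage, which is exactly why interfaces here don't force a class hierarchy the way Java's do.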

I will be writing more on TypeScript soon, but start using it now. Ultimately, this is a better way to develop, ending up with a much less error-prone code base that requires little to no change beyond renaming “js” to “ts” (and compiling).

Angular 2 – Trial Run

I’m currently still on EST in SF at the Angular U conference. I figured I would give ES6 and Angular 2 a try with the official documentation before I hear the keynotes and all.

I started from the “Quick Start Guide” (https://angular.io/docs/js/latest/quickstart.html). Sadly, right away I found mistakes… The documentation says all you need to install is the angular2 TypeScript definition, but when the compiler runs, it turns out you need a number of additional definitions in order to make the code compile.

Maybe the version of the guide hasn’t been updated to reflect changes in Angular.

Well anyways, after this the script did in fact compile, and it was fairly trivial. Now for my rant. I really love AngularJS. After developing a lot of applications with jQuery and getting fed up with the fact that there was no structure to my applications, I set out to look at the various libraries and frameworks that best fit my needs and the demands of most of the projects I work on.

AngularJS was the least “preachy”, most functional, and most forward-thinking framework overall. You didn't need to embrace any philosophies, file structure, really much of anything. The only mantra that I associated with Angular is: no DOM manipulation in anything other than a directive.

After developing with Angular 1.x for a nice chunk of time I have discovered that there is room for improvement and simplification.

Here is my brief list of issues with Angular 1.x. (Some of these are more limitations in its utilization and less issues with the framework directly.)

  1. Dynamic modules – Right now, officially, if you want to use Angular you need to load “all” of your modules at load time. For large applications this is not only inefficient but simply awful. For “websites” this is fine, but full-blown web applications may be huge, and if they are built as single-page applications, you want the entire application rendered from one base HTML page. For large applications I use RequireJS to dynamically load libraries and scripts as needed. There are 3rd-party libraries that dynamically resolve the Angular scripts and trigger digests to propagate throughout the application and mix in the newly loaded modules. This works fine, but it's a hack at best. Which leads to the next issue.
  2. Config phase restrictions – The config phase of the application is very logical. You have access to the raw modules and are able to modify them as needed prior to initialization. This is reminiscent of the Spring Framework for Java, which utilizes @Configuration classes to declare Java beans prior to the dependency injection process that greatly inspired Angular. Where it falls short ties into my first qualm: no third-party module is able to load dynamically and still affect the config phase of the application. For setting up routing, which is one of the core components of a web application, this is a very crucial step.
  3. Directives are overly complex – Everyone says that the two-way binding of Angular is what makes it special. They are wrong; directives are where the power of Angular shines. Two-way binding is the obvious outcome of an MVC architecture trying to truly separate the application domains. Scope isn't super complex, but I do think some of the restrictions and subtleties of directives make them very awkward and confusing. While I understand the notion that only a single isolated scope can exist on a single element, it can make many directives difficult to work with. The need to manually invoke digests using $scope.$apply because Angular didn't know otherwise was really messy and almost hackish. I think this was needed because of the lack of support for native Object.observe functionality.
  4. $scope.$watch – If you are dealing with a large application, you will want to limit the number of watches you use. I try to avoid them as much as possible; they consume memory and affect performance. Because the Object.observe function has not been adopted by all browsers, Angular needs to perform dirty checking, which can be expensive. Performance suffers, and you are forced to use Angular's broadcast system.
  5. Broadcast can be improved – Avoiding $scope.$watch when possible forces you to use some sort of event propagation system. Angular has its own $broadcast and $emit calls that send data down and up (respectively) through the scope on the routing key specified. My biggest issue is not so much with the way the broadcast system works, but rather that it is too limited. I want to see an actual AMQP-style event bus that can queue events/messages and use actual routing keys, much like you find in RabbitMQ. I have actually developed my own library (https://github.com/CyberPoint/eventBus) that is a JavaScript event bus. It doesn't deal with Angular scopes at all, but it could. I find the dot-notation hierarchy just as effective as scope alone.
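
The AMQP-style bus that last item asks for can be sketched in a few lines. This mimics RabbitMQ topic-exchange semantics (`*` matches one dot-separated segment, `#` matches any tail); it is my own toy version, not the eventBus library linked above:

```typescript
// Toy topic bus with RabbitMQ-style routing-key wildcards.
type Handler = (payload: unknown) => void;

class TopicBus {
  private bindings: { pattern: string; handler: Handler }[] = [];

  subscribe(pattern: string, handler: Handler): void {
    this.bindings.push({ pattern, handler });
  }

  // Deliver to every matching binding; returns the delivery count.
  publish(routingKey: string, payload: unknown): number {
    let delivered = 0;
    for (const b of this.bindings) {
      if (TopicBus.matches(b.pattern, routingKey)) {
        b.handler(payload);
        delivered++;
      }
    }
    return delivered;
  }

  // "*" matches exactly one segment; "#" matches the rest of the key.
  private static matches(pattern: string, key: string): boolean {
    const p = pattern.split(".");
    const k = key.split(".");
    for (let i = 0; i < p.length; i++) {
      if (p[i] === "#") return true;
      if (i >= k.length) return false;
      if (p[i] !== "*" && p[i] !== k[i]) return false;
    }
    return p.length === k.length;
  }
}

const bus = new TopicBus();
bus.subscribe("order.*", (o) => console.log("order event", o));
bus.publish("order.created", { id: 7 }); // delivered to the subscriber
```

Unlike $broadcast/$emit, delivery here is driven by the key's hierarchy rather than by scope nesting, so components need no scope relationship to talk.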

I hope to post a follow-up entry with how Angular2 addresses these items.

Who are you? – Identifying yourself, from a security perspective

They say you are what you eat. I think that you are whoever you seem to be, plus who you really are. Others' perception of you, while not truly important, may contribute to the scope of “who you are”.

Who are you?

In a doctor's office, they would start off with questions regarding name, address, gender, and family, and then get into the activities you do. They are attempting to triage you based on your lifestyle, the activities you perform, and your genetic history. There is obviously merit to this, as these are certainly strong factors in well-being. The car you drive and the clothes you wear, while they won't affect your health, certainly factor into how others perceive you. The way you walk, the way you curse (or not), whether you text while talking to someone else. All of these things come together to form an image: you.

Let’s explore these relationships and how understanding them can help identify and understand “you” the best we can.

1. You are a person.
2. Your gender is male.
3. You have dark hair.
4. You wear glasses.
5. You are left handed.
6. You live in Baltimore.
7. You are married.
8. You have two children.
9. You work in Baltimore.
10. You drive to work in a car.
11. You drive a sedan.
12. You own a mobile phone.
13. You are a Software Engineer.
14. You enjoy solving challenging problems.
15. You enjoy classic rock.
16. You are a passionate person.
17. You talk loudly.
18. You do not like hot weather.
19. You like to eat blueberries and do not like bananas.

Okay, so these are all true observations about myself. Let’s analyze this list for a second. Most of this list can be broken up into categories:

1. Observable physical attributes
2. Observable personality traits
3. Family members
4. Possessions
5. Preferences and opinions

I would call all of these “core” attributes. They can change over time; I may drive a different car or own a different phone. Ultimately, this list would stay up to date and relevant.

There is a new buzzword being used: IoT, or the Internet of Things. This notion isn't a new idea…just like the “cloud” isn't a new idea. IoT emphasizes relationships between objects that do not need human interaction. A prime example is a door with a special lock that is linked to your mobile phone and unlocks when you are within a certain proximity. Most of these items to date have been more about convenience and have not really been adopted by the layman.

I think that IoT can be utilized to fill in the blanks in our lives in more ways than you might think. Combining the proper IoT devices with highly advanced software, you can build an ecosystem that makes your security and connectivity as simple as snapping your fingers.

I have a phone at work, my mobile phone when I'm on the go, and a phone at home. Imagine that when I am at work, all of my calls are routed to my work phone; when I am on the go, to my mobile phone; and when I am at home, to my home phone. Aside from being a nice convenience, this buys you a lot more: a call never rings at work when you aren't there, and therefore no one else can answer it for you.

Replace a phone call with my computer. I have one at work, and at home. When I’m at home my work computer is locked and home computer is unlocked. When at work, my work computer is unlocked and home computer is locked.

Now replace a computer with a virtual account like your bank account. When the user is “you” you have access to your account. Somebody else doesn’t have access to your bank account.

Today, you use things like inputted secret credentials to authenticate yourself. Since you know this secret information you must be the account holder. Therefore, anyone who knows this secret information may access your account.

Additional precautions have been added to further lock down your account. You need your smartphone in order to receive a code in addition to your secret credentials. Not only do you need to know the secret information, but also have access to your phone. This is an obvious step in the right direction, but certainly makes it more difficult for “you” to access your account. Obviously, to date the extra step has been worth the added security measures to prevent unauthorized access. What if you could just say to your bank account…it’s me let me in!?

Let’s take what we have already established about your core attributes and what we know about secret credentials. What if we could take properties from the five categories we listed above and use them to build a signature that would clearly identify you, and no one else.

Let’s pretend that we walk around with a special bleeding-edge recording device that captures all sorts of information for a month. This device takes everything and categorizes its data into these five different categories. It breaks the data down into a knowledge database of facts and assumptions. Associated with each assumption may be a corresponding confidence, expressing the level of certainty of that assumption. Certain types of facts may also have confidence levels; perhaps a fact was observed only rarely or under special circumstances. Assumptions may be suggested by observations that haven't yet reached the threshold of facts.

Next time you want to access your bank account, instead of logging in with your secret credentials and multi-factor code, what if you provided your signature? After you walked around with this recording device, the data was converted into a knowledge database that generated a signature. This signature is a representation of the knowledge about you. Now, when you want to access your account, you need to satisfy the knowledge base to produce a compatible signature.
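One toy way to picture "satisfying the knowledge base": each fact carries a confidence, a fresh observation either matches it or not, and the identity claim succeeds when the confidence-weighted match score clears a threshold. Every attribute, value, and number below is invented for illustration:

```typescript
// Confidence-weighted matching of fresh observations against stored facts.
interface Fact {
  attribute: string;   // e.g. "lives-in"
  value: string;       // e.g. "Baltimore"
  confidence: number;  // 0..1, how certain the knowledge base is
}

function matchScore(knowledge: Fact[], observed: Map<string, string>): number {
  let agreeing = 0;
  let total = 0;
  for (const fact of knowledge) {
    total += fact.confidence;
    if (observed.get(fact.attribute) === fact.value) agreeing += fact.confidence;
  }
  return total === 0 ? 0 : agreeing / total;
}

const base: Fact[] = [
  { attribute: "lives-in", value: "Baltimore", confidence: 0.9 },
  { attribute: "handedness", value: "left", confidence: 0.8 },
  { attribute: "drives", value: "sedan", confidence: 0.5 },
];

// Today's observations cover the high-confidence facts but miss one.
const today = new Map([["lives-in", "Baltimore"], ["handedness", "left"]]);
console.log(matchScore(base, today) >= 0.7 ? "accepted" : "rejected");
```

Weighting by confidence means that missing a low-confidence fact (the sedan) barely hurts the score, while contradicting a high-confidence one would sink it, which matches the intuition in the paragraph above.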

What is this signature?
How is it derived from the knowledge base?
How do you produce a valid signature that is compatible with the initial signature?

We said earlier that there will be a confidence associated with facts. Assumptions are assertions that are less certain than facts but may be true.

If you asked me to write down a list of five items that identify me by my core attributes, I would most likely respond with some version of the list above.

– Location is easy…high confidence
– Certain attributes change; that would be specified in their definition and taken into account according to the nature of how they change
– Data feeds from other “people” can be linked into yours, like the next evolution of social networking
– I may be acting slightly differently, but because I am sitting here with my son and daughter, I must be me. Use data from other people in conjunction with your own; data is published to granted parties for consumption.

You are what you do – Identification based on behavior

Thinking about the desire for a password-less society: when it boils down to it, there are a few major categories of authentication.

1. You need something physical that only the owner would possess.
2. You need some sort of knowledge that only the owner would possess.

We are familiar with the first and second one. The first can be a simple lock and key. The second a username and password.

The third, less common and much more difficult to achieve, is the password that isn't a password: verifying that you are “acting” or doing something the same way the authenticated party would. There are movies that use a voice recording and match the voice signature against the authenticated party's known voice. I've read articles about detecting a distinct electrical signature that each owner gives off, unique to himself. I've also heard of individual keystroke patterns, much like handwriting recognition.

I had written about an idea that learned what websites you went to, your purchase history, radio history, Netflix, etc., essentially gathering as much data as possible, all to train a model used to authenticate you with predictive algorithms.

I like this idea, but it’s really complicated and will require significantly sophisticated models.

One additional factor that has not been mentioned is whether the authentication is occurring according to the account holder's will or against it. If account owners are held at gunpoint, or in some other situation that threatens their life or that of a loved one, they may give up credentials to access the sensitive information. For some things that is obviously okay and the “smart” thing to do. For other things, like matters of national security, some may say the information is so damaging that they would not want to divulge it even when their life is being threatened.

This is an unfortunate but real situation for certain types of data. A security mechanism would be ideal if it could prevent the account owner from authenticating even if they have “given up” and are trying to save their life…the data must not be compromised no matter what, and a safeguard must be in place.

We can utilize the human factor to add additional layers of security: biometric data such as heart rate, the account holder's posture, their walking gait, their speech patterns, their hand gestures. All of these characteristics can be used to identify anxious and unusual behavior. If we are dealing with a case of torture, there will certainly be telltale signs.

This is obviously an extreme yet real case, one that I used to help illustrate a point. In extreme scenarios, even the best-trained soldiers will react under pressure. I think that with a well-calibrated “mechanism” using a multitude of sensor data, a baseline can be established to identify a user. This could not only identify the user but also identify certain behaviors, moods, and reactions of the user.

Let’s take facial recognition. Measuring the distances and locations between a few dozen positions on the user's face can yield a very accurate model for identifying that individual in the future.

Now take that same facial recognition while the user is watching a comedy, or a tear-jerking movie. We can establish a baseline emotion for each individual response we want to associate. Utilizing heart rate, hand gestures, and the like, once the system is well trained, a few quick images could reveal instantly who the user is.
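The landmark-distance idea above can be sketched very simply: represent a face as a vector of normalized pairwise landmark distances and compare enrollment and login vectors with a Euclidean distance against a tolerance. The landmark choices, values, and tolerance are all made up for illustration; real systems use far richer features:

```typescript
// Compare an enrolled face vector against a candidate capture.
type FeatureVector = number[];

function euclidean(a: FeatureVector, b: FeatureVector): number {
  if (a.length !== b.length) throw new Error("vector length mismatch");
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

function sameFace(
  enrolled: FeatureVector,
  candidate: FeatureVector,
  tolerance = 0.1 // arbitrary; a real system would calibrate this
): boolean {
  return euclidean(enrolled, candidate) <= tolerance;
}

// e.g. [eye-to-eye, nose-to-chin, mouth-width] distances, normalized
const enrolled = [0.42, 0.61, 0.33];
console.log(sameFace(enrolled, [0.43, 0.6, 0.34])); // small variance: true
console.log(sameFace(enrolled, [0.55, 0.5, 0.2])); // different face: false
```

The tolerance is what lets the same machinery double as the mood detector described above: small, patterned deviations from the baseline are signal, not noise.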

Utilizing tools like Kinect and Leap motion adding in things like infrared and close images of the pupil and the face a great deal of information can be used to identify a user.

Imagine if you could watch a movie and the next time you do I can predict how you will react at each frame with a percent of certainty.

I am not suggesting that we merely understand the psyche of the user, but rather their innate responses and tendencies…these are not things that can easily be broken.

One thing we can take from this, at a minimum, is the ability to add in the “scared” factor; by recognizing unusual behavior, we can protect many things. I want to use this to identify you, and to know when it is you but you aren't acting like yourself. Obviously, certain traits will be more dominant than others.

We can treat this as just a layer on top of a standard multi-factor system, one that incorporates tests to help verify that the account holder is not under duress.

The completely different application for this is convenience and AI facilitators. If we can nail down the pattern that identifies an account holder and then detect variances in their behavior, we can trigger different things in response. This goes well beyond security and much more into the realm of IoT and automation, but let's explore it.

You come home and you walk in. Of course your car has pulled up, and your home already knows that you are approaching with your Wi-Fi connected phone. You are emitting your MAC address and a public key, alerting your house that you are approaching. Your door is unlocked with NFC automatically, but really, Wi-Fi with a unique signature ID can trigger that as well. You walk in and your home is already lit to your specifications, with temperature control as well. Nest helps with some of this, as does detecting ambient light in conjunction with the room and the individuals involved. Depending on the activity, different illumination settings can be triggered; when a “reading” action is detected, the lighting should accommodate your preference.

Okay…I'm leading up to it…now when you get to your computer, it is unlocked because you are the one using it. My vision of the ultimate in security and convenience is really one solution: tracking your behavior, your adjustments, your actions, your reactions, and learning from them to better identify you and make your life more secure.

Your house knows it is you because it knows your stride, your face, your smile, and the way you hum. All of the sorts of things that your girlfriend may pick up on can be incorporated into the ultimate system, which helps to “get to know you to protect you”.