• Uncertainty Wednesday: Weather - Climate September 20, 2017 11:42 am
    Last Uncertainty Wednesday I introduced how to think about weather using the concepts we have developed in this series. We saw that our improved data collection and ability to process more complicated weather models have given us significantly improved predictions. We also learned that because of the chaotic nature of weather, despite our massive progress on short term forecasts, we still do quite poorly on forecasts that go out further than a week.

    Now a key question that we have asked throughout this series is what we can learn about the reality of weather from the signals that we observe. Of particular interest here is the question whether the observations should make us more or less inclined to believe in climate change, a topic that I have also covered extensively here on Continuations.

    To get started on that we need to draw a distinction between weather and climate. The first sentence of the Wikipedia entry on climate provides an interesting approach here:

    Climate is the statistics of weather over long periods of time.

    The key word here is the "statistics" of weather. What is a statistic? It is a summary of observations, such as a minimum, or a maximum, or a total, or an average, or a variance. The weather is the temperature, rainfall, humidity, etc. on a given date and time. There is past weather, current weather and forecasts of future weather. The climate would then be summaries of these observations over longer periods of time. These periods don't need to be contiguous. For instance, you could take the minimum, maximum and average temperatures for the month of July in New York City using data from July for, say, the last 10 years (or the last 100 years). So the weather is the raw observations and the climate is statistics computed from those raw observations? So the climate is simply a summary of the weather?
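As a small sketch of what "statistics of weather" means, here is how one might compute such summaries over ten Julys of daily highs. The temperature data is simulated (the post includes no actual observations), so the specific numbers are purely illustrative:

```python
import random

# Hypothetical data: 10 years of July daily highs (deg F) for NYC,
# simulated here since the post doesn't include actual observations.
random.seed(0)
july_highs = {year: [random.gauss(84, 5) for _ in range(31)]
              for year in range(2008, 2018)}

# Pool the raw observations: 10 Julys x 31 days = 310 data points.
all_obs = [t for temps in july_highs.values() for t in temps]

# "Climate" as statistics of weather: summaries of the raw observations.
stats = {
    "min": min(all_obs),
    "max": max(all_obs),
    "mean": sum(all_obs) / len(all_obs),
}
print({k: round(v, 1) for k, v in stats.items()})
```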
    That seems, well, not a very useful definition of climate. The weakness of this definition is the result of a problem I alluded to in my post about expected value, where I warned against confusing the sample mean with the expected value of the probability distribution. A better definition of climate would be as follows:

    Climate is the probability distribution of possible weather events.

    The statistics of weather are supposed to help us understand what that probability distribution is. With this definition, climate change becomes a shift in the probability distribution of weather events. And we begin to understand that inferring whether such a shift has indeed occurred is quite tricky. We have a bunch of hot years in a row. Is that just "dumb luck" (sort of like losing the coin toss at five matches in a row)? Or is it that the distribution has changed (the referee is tossing a biased coin)?

    Next Uncertainty Wednesday we will dig deeper into the relation between the sample mean and expected value using the context of weather and climate.
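To make the "dumb luck" question concrete, here is a small simulation under an assumed unchanged climate, where each year independently comes in above the long-run median with probability 1/2, exactly like a fair coin toss. Five above-median years in a row then happens by pure chance with probability (1/2)^5 = 1/32, or about 3%: rare, but far from impossible:

```python
import random

random.seed(42)

TRIALS = 100_000
STREAK = 5  # number of consecutive "hot" (above-median) years

# Under a stationary climate, each year is above the median with
# probability 1/2, independently -- a fair coin toss per year.
hits = 0
for _ in range(TRIALS):
    if all(random.random() < 0.5 for _ in range(STREAK)):
        hits += 1

print(hits / TRIALS)   # simulated probability of a 5-year hot streak
print(0.5 ** STREAK)   # exact answer: 1/32 = 0.03125
```

So observing one hot streak by itself is weak evidence of a shifted distribution; the streak has to be long, or repeated, before chance becomes an implausible explanation.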
  • World After Capital: Progress Report September 18, 2017 1:30 pm
    My book World After Capital has been an ongoing project for a couple of years now. I am excited that as of this weekend I feel it includes all the ideas I want in there. Some of them, such as strategies for overcoming the dominance of nation states, are so far in a protozoic stage, but at least they are there. Others, such as the chapter on the sufficiency of "Capital," are in need of a substantial rewrite because they have fallen out of sync with other parts (see the rewritten chapter on "Needs").

    What is next? Rewriting the Capital chapter is high on my priority list. I am also now in need of a copy editor to go over the whole book from beginning to end. There are lots of inconsistencies in voice stemming from the iterative writing process. There are also ideas that could be expressed more clearly. And I want to revisit some of the sequencing, although having explored many possible high level outlines I am pretty happy with the one I have now.

    Finally, I have been getting more inquiries about printed and bound versions of the book. So I am starting to look into options for that. I am particularly interested in Print on Demand solutions, as it seems crazy to me in the year 2017 to print copies without knowing the demand. On the other hand, I love a book that's printed on quality paper in an attractive font and well bound. Ideally one can have both, without the print on demand copy feeling cheap and falling apart quickly.

    If you know an amazing copy editor who is also fundamentally interested in the topics of World After Capital, please send them my way. If you have ideas for high quality Print on Demand, I am interested in that also.

    PS If you know someone who is great with fonts, send them my way too: instead of a designed cover I want just text (and of course the same goes for the book itself).
  • Uncertainty Wednesday: Weather (Intro) September 13, 2017 8:46 pm
    So far in Uncertainty Wednesday we have mostly built up concepts and ideas, with only one extended example. Given the two massive hurricanes Harvey and Irma, the weather received a lot of attention, including the question to what extent the occurrence and/or severity of these storms lets us draw any conclusions about climate change. For that reason we will spend the next few Uncertainty Wednesdays looking at the weather using the ideas and concepts from the series.

    First, let's put weather in the context of our framework, which consists of reality, explanations and observations. The reality in question is the complete state of the Earth's atmosphere. The first thing to note here is that the atmosphere is not a closed system. It receives energy from outside (as a first approximation entirely from the sun, albeit much of that via heat radiated back by the Earth's surface) and its mix of gases changes due to processes here on earth, such as photosynthesis and the burning of fossil fuels.

    Second, let's note that our set of observations about the state of the atmosphere and of the energy providing and gas changing processes is small relative to the scale of the system. This stands in sharp contrast to the many smaller examples used throughout the series, where the system had only two states and we had a signal with two values. Here we have a system with a great many possible states and we have a lot less signal (e.g., measurements of, say, air pressure). It is also important to note, though, that how much signal we have available has dramatically increased over time through technology, such as satellite imaging.

    Third, weather is a classically deterministic system, albeit one whose explanations we do not fully understand. There are some explanations that are quite simple and well understood, such as air flowing from areas of high air pressure to areas of low pressure, causing wind.
We also understand, for instance, how air above land and water heats up differentially, also causing wind to form. But explanations for other phenomena, such as what goes into cloud formation and when clouds begin to rain, are less complete and less well understood.

    Fourth, weather is a chaotic system, as described in the introduction to the framework. As we saw there, small differences in observations will lead us to large differences in predictions about the future, especially the further out the future is.

    Something that follows pretty much immediately from all of the above is: as we get more observations (and better computers for crunching them) our weather forecasts will get better. Here is a graph from an article that appeared in Nature which beautifully summarizes this:

    We can see that forecasts have gotten much better between 1981 and 2015, with a 3 day forecast (blue lines at top) going from about 80% accuracy to about 97.5% accuracy. We also see that accuracy drops off dramatically as the forecasts go further out, and even though we see a big improvement in 10 day forecasts (grey lines at bottom), they are still pretty bad and have leveled out around 40% accuracy. Something else the chart shows is the convergence between Northern and Southern Hemisphere forecasts. Whereas the overall improvement is the sum of both better observations and better models, the convergence is largely the result of much better southern hemisphere data in the satellite age.
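The chaos point can be illustrated numerically with a toy example: the logistic map, a standard chaotic system (not an actual weather model). Two trajectories that start a hair apart stay close for a while and then diverge until the tiny initial difference has grown to the full scale of the system, which is exactly why forecasts degrade so badly past a week or so:

```python
# Toy illustration of sensitivity to initial conditions: the logistic
# map x -> r*x*(1-x) with r = 4 is chaotic. Two starting points that
# differ only in the sixth decimal place end up far apart.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.400000, 0.400001  # "measurements" differing by one millionth
max_gap = 0.0
for step in range(1, 31):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))
    if step in (10, 20, 30):
        print(step, abs(x - y))  # the gap grows by orders of magnitude
```

On average the gap roughly doubles each step, so an error of one part in a million reaches the size of the whole state space within a few dozen steps. More precise observations buy you more steps of useful forecast, but only logarithmically more.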
  • 9/11 September 11, 2017 11:33 am
    Going to work today will feel odd because the weather reminds me so much of 2001. Gorgeous blue skies. Not quite summer anymore, but also not yet fall. The weather is one of my strong memories from 2001.

    The other memory concerns later in the day. We knew a horrific act of terrorism had been carried out that killed thousands of people. And yet on the Upper West Side, where we were living at the time, it didn't feel entirely real, as we saw much of it on television. I am experiencing that sensation too this morning as I am looking at pictures of destruction brought about by hurricane Irma (following on the heels of images of biblical flooding from Houston and the leveling of Mexican towns by an earthquake). It all feels so extreme and surreal and yet it is the grim reality for millions of people.

    So my thoughts are with all those who were affected by 9/11 and those whose lives are being upended right now.
  • ICOs and the Promise and Perils of a Global Capital Market for Everyone September 8, 2017 1:20 pm
    I have already written a bunch about ICOs here on Continuations, including earlier this week about the Chinese ban. Regulators are approaching this, not surprisingly, by focusing on their own country. While that's understandable, it undermines one of the most promising aspects of using tokens as equity: a global capital market that's readily accessible to everyone.

    If you are a wealthy investor or a large corporation, you already have access to a global capital market today. For example, a few years back I called my broker at Morgan Stanley and asked them to buy some Japanese robotics stocks for me that are only listed in Japan. Morgan Stanley took care of everything for me, including figuring out how to buy Yen, place the trade, custody the securities and make the holdings show up in my account. Or take Apple, a company that already sits on a huge cash pile. Nonetheless, the company has been issuing Euro, Yen and Canadian dollar denominated bonds for the last few years, tapping into global capital markets.

    If you are a smaller company, though, or an individual investor with a smaller account, you tend to be restricted to your local capital market. In many countries those local markets are relatively illiquid and/or suffer from other problems. In the US, for example, we have made it difficult for companies to go public through both overregulation and a broken IPO process. Combined with ongoing M&A activity, here in the US the number of publicly listed companies has been declining substantially.

    Now the end goal probably shouldn't be a global free-for-all capital market with scams everywhere you look and people losing their life savings overnight. But conversely I don't think the right answer is trying to stuff everything back into country-by-country securities regulation, much of which dates back to pre-Internet days and doesn't at all account for how much more data can be shared today by a company on an ongoing basis.
For instance, the bulk of US securities regulation dates back to the 1930s and the aftermath of the stock market crash.

    So directionally what could be done? As a starting point, companies themselves could embrace high standards of transparency and could limit how much exposure individual investors can buy. The latter could either be a fixed number or be based on suitability tests (e.g., if you can prove you hold a lot of, say, BTC or ETH you can invest more). As a next step there could be a global self-regulatory body. People sometimes rightly scoff at the notion of self-regulation, but there are great examples where it has worked well, even better than government standards, such as Demeter (for organic food). There doesn't need to be a single one of these; there can be multiple, which allows for some degree of experimentation.

    One important question that comes up is how rights would be enforced in such a world. As a US shareholder in a US corporation, I have certain statutory rights. In practice though, the primary mechanism in the capital market is not voice but exit. If I don't like what a company is doing, my general recourse is to sell its shares. That mechanism also exists in the token world. The right approach to voice would be through the self-regulatory bodies I mentioned above, which could then be supported by traditional government regulators.

    All said then, I believe that there is an opportunity here for a new system to emerge and it would be a shame if we shoved it back into the existing boxes before we have given that a thorough shot.
  • DACA and the Immigration Policy Mess: Pass the DREAM Act September 6, 2017 11:27 am
    Since I am an immigrant to the United States, I want to comment on the Trump administration's announcement yesterday to end DACA (Uncertainty Wednesday will resume next week). I first came to the States as an exchange student in 1983 to 1984, when I lived with a wonderful family in Rochester, Minnesota. One of my lasting memories is how quickly one can fall in love with a country as a child or young adult. For the DREAMers, who all came to the United States as children and have grown up here, the United States is their home. Threatening to take that away from them is cruel. Doubly so after giving them hope first.

    The blame for this situation though rests with Congress and past Presidents, who have failed to make any meaningful progress on immigration reform. It is worth remembering that the DREAM Act has been around for 16 years. There have been multiple attempts to pass it, with support at varying times in the House and Senate, but never in the two at the same time, including a bipartisan filibuster that included 8 Democrats. The opposition by Democrats often arose because they wanted comprehensive immigration reform or nothing.

    There is much more we need to fix about immigration, but passing the DREAM Act would be a good start. Everyone should be calling their representatives and senators and urging them to do this. It has broad public support. The key challenge will be not having it become a political football again, with all sorts of crazy plans attached to it, such as funding the border wall. That's where pressure from citizens on their representatives will make a big difference, as will senior leadership from both parties.

    Let's get the DREAM Act passed and let it be the beginning of a return to bipartisan politics!
  • China, ICOs and Crypto Regulation September 5, 2017 1:01 pm
    As has been widely reported, China has banned all ICOs. Given the torrid pace of ICOs in China, many of which appear to be downright scams, this should not come as a surprise. What will be interesting to see though is what comes next. Here are some key questions.

    First, is this a temporary ban only? Will China come out with a set of regulations that allow some ICOs to go forward? Given that China has allowed crypto currencies per se, I expect that they will come up with regulation and that this ban was them pulling the emergency brake.

    Second, how will regulators in other countries react? They might see this as an opportunity to follow suit, or they might see it as an opportunity to position themselves as more attractive to innovation. Given the cautious initial findings of the SEC around applying the Howey Test, I think they are unlikely to act rashly now.

    Third, regulation will be a long term issue for innovators and investors in the crypto currency field. Because these systems are global from day 1, countries will find that they have less regulatory power than in more traditional financial markets. So I expect that what's legal where will shift around a bunch over time. That adds an important component of risk and uncertainty to innovating and investing in the blockchain space. How big is that extra risk and what to do about it? That's one of the key issues to wrestle with.

    As we continue to invest in this space we will be paying close attention to this. So expect updates here in the coming months.
  • Taking a Break August 18, 2017 12:53 pm
    This will be my last post until Labor Day. I will be spending as little time online as possible, reading books instead, spending time with family and friends and working on World After Capital.  I will be disabling all notifications on my phone and checking email only twice a day. Given all the craziness here in the US and elsewhere in the world, I have been spending too much time on news and I am looking forward to this break to dial things down. 
  • Uncertainty Wednesday: Risk Seeking (Jensen’s Inequality Cont’d) August 17, 2017 3:31 am
    Last Uncertainty Wednesday, we saw how diminishing marginal utility of wealth provides an explanation of risk aversion via Jensen's inequality. Why would it be then that lots of people seem to like small gambles, like a game of poker among friends? One possible explanation is that the utility function is locally convex around your current endowment. So this would look something like the following:

    In the immediate area around the endowment (marked with dotted lines for two different levels) the utility function is convex, but for larger movements it is concave. In the convex area someone would be risk seeking. Why? Well, because Jensen's inequality now gives us

    U[EV(w)] ≤ EV[U(w)]

    Again, the left hand side is the utility of the expected value of the wealth, whereas the right hand side is the expected utility, meaning the expected value of the utility. Now the inequality says that someone would prefer an uncertain amount over a certain one. Here is a nice illustration from Wikipedia:

    We see clearly that the Certainty Equivalent (CE) is now larger than the expected value of wealth, meaning the Risk Premium (RP) works the other way: in order to make a risk seeker as well off as accepting the bet, you have to pay them more than the expected value.

    Next Uncertainty Wednesday we will look more at how incredibly powerful convexity is in the face of uncertainty.
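A quick numeric check of the inequality, using U(w) = w² as a stand-in for the locally convex region of the curve (the post does not specify a functional form) and a gamble paying $1,100 or $900 with equal probability:

```python
# Minimal check of Jensen's inequality for a convex utility function.
# U(w) = w**2 is an assumed stand-in for the locally convex region.
def U(w):
    return w ** 2

outcomes = [(1100, 0.5), (900, 0.5)]
ev_w = sum(w * p for w, p in outcomes)   # expected wealth: 1000
eu = sum(U(w) * p for w, p in outcomes)  # expected utility: 1,010,000
ce = eu ** 0.5                           # invert U: U(ce) = eu

print(U(ev_w) <= eu)   # True: U[EV(w)] <= EV[U(w)] for convex U
print(round(ce, 2))    # certainty equivalent ~1004.99, above the EV
```

The certainty equivalent of about $1,005 exceeds the $1,000 expected value: this risk seeker would turn down a sure $1,000 in favor of the gamble, and you would have to offer more than the expected value to make them whole for giving it up.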
  • ICOs and Governance August 15, 2017 11:48 pm
    As has been widely reported, in the last few months ICOs have raised significantly more money for blockchain startups than has come from traditional venture investors. This can be seen as a sign of the long discussed unbundling of venture capital. The idea is that while VCs bundle capital, advice, governance and possibly services (e.g., help with recruiting), technology may make it possible to separate out these different functions. Today I want to focus on "governance," which is the least understood, and I believe most difficult to accomplish, of these functions.

    What is governance? The word, like "government," comes from the Greek word for "to steer." It stands for the decision making bodies and processes that steer a company, a protocol, or a country. Why is governance needed? Because we (a) cannot in advance specify the right course of action for all possible contingencies and (b) for many of those contingencies there is not a single, obviously optimal action to take. That's often the case because actions will impact different constituents differently. The role of governance, of steering, is to help choose an action at those moments.

    What are examples of governance issues? In companies, these include events such as fundraising, considering an M&A offer, or replacing a CEO. The need for governance in a company tends to be relatively limited because the CEO has a lot of discretionary power. In contrast, protocols completely describe the activities of participants, and so almost any change to a protocol turns into a governance issue. People have written about governance once a protocol is up and running, including different mechanisms for triggering (or avoiding) forks. But the issue I want to focus on instead is the question of governance post ICO and pre protocol launch.
How should the allocation of funds be steered?

    For a lot of projects the answer appears to be that funds can be spent on whatever the founding team deems appropriate, without any additional governance mechanism. This is akin to a company without a board, or with a board that's controlled by the CEO. Other projects have set up foundations that control some of the ICO proceeds, with an independent board as a governance mechanism.

    Governance will turn out to be important here because there are so many allocation decisions to be made (several projects have raised north of $100 million). And those decisions not only don't have obviously right answers but also come with the potential for self dealing, starting with determining salaries for team members.

    Now the VCs' role in governance of companies is not perfect either. There are lots of potential conflicts of interest. For instance, investor board members may want to accept (or reject) an M&A offer that management wants to reject (or accept) because their economics are quite different. And there are of course plenty of examples of VCs not exercising their governance role.

    Nonetheless, the right answer is probably not doing away with governance altogether. I recommend that all the projects that have raised money through ICOs put some kind of governance structure in place before spending a lot of the funds. It will be important for that mechanism to not just include investor and founder interests, but to also reflect community members, the people who are ultimately supposed to benefit from the protocol.
  • Preparing for Superintelligence: Living the Values of Humanism Today August 12, 2017 5:45 pm
    In my draft book World After Capital, I write that having knowledge is what makes us distinctly human and gives us great power (and hence great responsibility). I define knowledge in this context as science, philosophy, art, music, etc. that's recorded in a medium so that it can be shared across time and space. Such knowledge is at the heart of human progress, because it can be improved through the process of critical inquiry. We can fly in planes and feed seven billion people because we have knowledge.

    There is an important implication of this analysis, though, that I have so far not pursued in the book: if and when we have true General Artificial Intelligence, we will have a new set of humans on this planet. I am calling them humans on purpose, because they will have access to the same power of knowledge that we do. The question is what they will do with knowledge, which has the potential to grow much faster than knowledge has to date.

    There is a great deal of fear about what a "Superintelligence" might do. The philosopher Nick Bostrom has written an entire book by that title, and others, including Elon Musk and Stephen Hawking, are currently warning that the creation of a superintelligence could have catastrophic results. I don't want to rehash all the arguments here about why a superintelligence might be difficult (impossible?) to contain and what its various failure modes might be. Instead I want to pursue a different line of inquiry: what would a future superintelligence learn about humanist values from our behavior?

    In World After Capital I write that the existence and power of knowledge provides an objective basis for Humanism. Humanism in turn has key value implications, such as the importance of sustaining the process of critical inquiry through which knowledge improves over time. Another key value implication is that humans are responsible for animals, not vice versa.
We have knowledge and so it is our responsibility to help, say, dolphins, as opposed to the other way round.

    To what degree are we living this value of responsibility today? We could do a lot better here. Our biggest failing with regard to animals is industrial meat production, and as someone who eats meat, I am part of that problem. As with many other problems that human knowledge has created, I believe our best way forward is further innovation, and I am excited about lab grown meat and meat substitutes. We have a long way to go in being responsible to other species in many other regards as well (e.g., pollution and outright destruction of many habitats). Doing better here is one important way we should be using the human attention that is freed up through automation.

    Even more important though is how we treat other humans. This has two components: how we treat each other today and how we treat the new humans when they arrive. As for how we treat each other today, we again have a long way to go. Much of what I propose in World After Capital is aimed at freeing humans to be able to discover and pursue their personal interests. We are a long way away from that. That also means constructing the Knowledge Age in a way that allows us to overcome, rather than reinforce, our biological differences (see my post from last week on this topic). That will be a particularly important model for new humans (superintelligences), as they will not have our biological constraints. Put differently, discrimination on the basis of biological difference would be a terrible thing for super intelligent machines to learn from us.

    Finally, what about the arrival of the new humans? How will we treat them? The video of a robot being mistreated by Boston Dynamics is not a good start here. This is a difficult topic because it sounds so preposterous. Should machines have human rights? Well, if the machines are humans, then clearly yes.
And my approach to what makes humans distinctly human would apply to artificial general intelligence. Does a general artificial intelligence have to be human in other ways as well in order to qualify? For instance, does it need to have emotions? I would argue no, because we vary widely in how we handle emotions, including conditions such as psychopathy. Since these new humans will likely share very little, if any, of our biological hardware, there is no reason to expect that their emotions should be similar to ours (or that they should need emotions at all).

    This is an area in which a lot more thinking is required. We don't have a great way of discerning when we might have built a general artificial intelligence. The best known attempt here is the Turing Test, for which people have proposed a number of improvements over the years. This is an incredibly important area for further work as we charge ahead with artificial intelligence. We would not want to accidentally create, not recognize, and then mistreat a large class of new humans. They and their descendants might not take kindly to that.

    As we work on this new challenge, we still have a long way to go in how we treat other species and other humans. Applying digital technology smartly gives us the possibility of doing better. That's why I continue to plug away on World After Capital.
  • Uncertainty Wednesday: Risk Aversion (Jensen’s Inequality Cont’d) August 9, 2017 10:43 pm
    Last Uncertainty Wednesday, I introduced Jensen's Inequality. I mentioned briefly that it explains a lot of things, and today we will look at the first one of these, which goes by the name of risk aversion. This is simply economists' way of saying that most people prefer a smaller guaranteed payment over a larger but uncertain one. We will now see that this follows directly from diminishing marginal utility of money via Jensen's inequality.

    So what is this "diminishing marginal utility" of money? Well, it is generally assumed that the more money you make, the less an additional, say, 100 dollars will mean to you. This seems, for most people anyhow, a pretty safe assumption. If you are currently making $1,000 per month, then getting an extra $100 per month benefits you a lot more than if you are already making $10,000 per month. But of course you are still somewhat better off making $10,100 per month.

    Putting that together suggests a function that's increasing but at a decreasing rate, and that's exactly a concave function. Since we are talking about utility here, we will use U(w) to denote this, with w standing for wage or wealth. Then U(w) being concave immediately gets us the following from Jensen's inequality:

    U[EV(w)] ≥ EV[U(w)]

    The left hand side is the utility of the expected value of the wage, whereas the right hand side is the so-called expected utility. So anyone with diminishing marginal utility will prefer, say, $1,000 per month guaranteed over the possibility of $1,100 per month with 50% probability and $900 per month with 50% probability (expected value also $1,000). That is known as risk aversion. The following image from Wikipedia nicely illustrates the situation:

    In the image we can also graphically see two values: the so-called certainty equivalent and the risk premium. The certainty equivalent (CE) is the amount that would make the person indifferent between the risky payoff and the certain payoff.
We see that the certainty equivalent is less than the expected value. Meaning, in the example above, someone with risk aversion would in fact accept less than $1,000 (the expected value) with certainty and still feel as good as having the uncertainty of $1,100 with 50% and $900 with 50%. The difference between the certainty equivalent and the expected value is known as the risk premium (RP). That is the amount someone would be willing to pay to not face the uncertainty.

    So if you are currently making $1,000 per month and your employer says that next month if the company does well you will make $1,100 but if it does poorly you will make $900, then a risk averse individual would be willing to pay some money, say $20, to make $1,000 with certainty (which after paying the risk premium will be $980). If you read my series on insurance fundamentals you will recall that this is the basis for the existence of insurance.

    Next Wednesday we will talk about risk seeking and get into the ideas of convex tinkering and antifragility.
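The whole example can be checked numerically. Assuming U(w) = √w as a stand-in concave utility (the post does not pin down a specific function), we can compute the certainty equivalent and risk premium for the $1,100/$900 gamble directly:

```python
import math

# Numeric sketch of the post's example, with U(w) = sqrt(w) as an
# assumed stand-in for a concave (diminishing marginal) utility.
def U(w):
    return math.sqrt(w)

outcomes = [(1100, 0.5), (900, 0.5)]
ev_w = sum(w * p for w, p in outcomes)   # expected wealth: 1000
eu = sum(U(w) * p for w, p in outcomes)  # expected utility
ce = eu ** 2                             # invert U: U(ce) = eu
rp = ev_w - ce                           # risk premium

print(U(ev_w) >= eu)  # True: Jensen for concave U
print(round(ce, 2))   # ~997.49, below the $1,000 expected value
print(round(rp, 2))   # ~2.51: what this person would pay to avoid risk
```

For this particular utility function the risk premium comes out to about $2.51 rather than the $20 in the narrative; a more sharply concave U (i.e., a more risk averse person) would produce a larger premium.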
  • The Fallacy of Biological Determinism August 6, 2017 9:15 pm
    In my draft book World After Capital, I write about how digital technology has given us the possibility to leave the Industrial Age behind and enter the Knowledge Age. In an early chapter on Optimism, I argue against economic, historical and technological determinism. These are all theories in which an external force determines the shape of society, instead of the decisions made by us humans under the guidance of a set of values.

    The memo written by a Google employee is a good reason to add "biological determinism" to this list of false determinisms. Biological determinism argues that certain features of society are the necessary result of some underlying biological process. From there, biological determinism often goes on to argue against efforts to change society, with sometimes outright and sometimes veiled claims that such a change effectively goes against (human) nature.

    Here is the outline of the post. First, there absolutely are biological differences among humans resulting from our DNA and hence influenced by inheritance, and these include our brains. Second, biological differences used to matter more during the Agrarian Age and somewhat during the Industrial Age (even though they were not determinative even then). Third, with the possibility of entering the Knowledge Age, biological differences can be made irrelevant due to technological progress.

    We know that the development of our bodies is influenced by our genetic inheritance. For instance, how tall someone will grow is in part affected by how tall their parents are. The body of course includes the brain, and so it would be strange to assume that our cognitive or emotional processes are completely untouched by genetics. I was born a "Lefty," as in, I liked to pick things up with my left hand. This is a clear and hopefully non-controversial example of a cognitive process with known genetic influence (albeit not a super well understood one, as it is likely polygenic).
    Trying to argue away the historic existence of genetic differences goes against science. What we need to focus on is how such differences mattered in the past and, even more importantly, how much they will and should matter in the future.

    During the Agrarian Age and even much of the Industrial Age, our technological capabilities were quite limited compared to today. As a result, certain tasks, like lifting a heavy object, often required physical strength (“often,” because we had really awesome early technology for lifting, such as pulleys, but unlike today, they were not widely available). On average, males were able to develop more physical strength. Many societies therefore favored males for carrying out these tasks. But even then there was nothing deterministic about it, as not every society had exactly the same division of labor or developed the same tools. Many tools, as it turns out, were designed for right handed people (who make up about 90% of the population). The influence of right handedness on design persisted for a long time, such as most cars having the ignition lock to the right of the steering column (with Porsche as a famous exception). Handwriting a left-to-right language in ink, which was still a common technology when I first got to school, also favors right handedness: try writing with your left hand and not smudging the fresh ink. Handedness gives us a glimpse as to why technology often erases biological differences. At age 16, I learned to write on a typewriter and all of a sudden being left handed made no difference (there is actually more to the story as we will see in a bit).

    Technology gives us the potential to make biological differences irrelevant. It does so in two ways: by letting us augment (or supplant) humans with machines and by allowing us to modify ourselves. For instance, physical strength is already largely irrelevant today and will become even more so in the future with robots, exo-skeletons, and advanced light weight materials. 
    I just gave the example of the typewriter, but early typewriters required you to manually advance the paper as part of the “carriage return,” which was operated with the right hand. By the time I was 16 though, IBM had a really cool electric typewriter called the Selectric, which let you just hit a key and that was it. Another fun technological improvement: many modern cars no longer have an ignition lock, but just a button which is easy to press, even for someone left handed (unlike trying to get a key into the ignition lock). And here is yet another automotive example of how technology can be used to make cognitive differences irrelevant: some people found it easier to learn how to read a map than others. Well, now we have turn-by-turn directions. But there is more to cognitive differences and the fallacy of biological determinism. Biological determinists like to trot out IQ results. Here too though they suffer a confusion between what is currently measured as a result of the past and what is possible in the future. We have learned a great deal in recent years about the amazing degree to which the brain can grow new connections (even in adults). The brain is highly (re)programmable. And here is where the rest of my handedness story comes in, which I skipped earlier: I actually learned how to write with my right hand. Sure, it took more effort and my handwriting was awful at first compared to other kids, but over a couple of years the difference went away. There are many examples of people who were at first told they couldn’t learn something only to become experts at it. I highly recommend Grit by Angela Duckworth, which in addition to great anecdotes also provides lots of statistical evidence on how much can be learned given enough time (and deliberate practice). We won’t know for quite some time what people will be able to learn in a world in which we can give everyone access to all the world’s knowledge. 
    That is not the world we lived in until quite recently: where you were born and what your parents were able to afford had a huge impact on what you could learn. The idea though that IQ tests are a good measure of what any one person could learn with enough time and focus is in direct contradiction to what we know about the brain and what we have already observed in individuals. Historical statistics about IQ and race or gender are useless for normative purposes. They measure the past and ignore the potential offered by technological progress. Let’s suppose though for a moment that eventually we figure out that there is a meaningful degree of genetic difference in neuroplasticity. Why would we then assume that this is not something we could and should overcome with technology? Now if you happen to think that my handedness example throughout is making light of the matter, consider this: at one point being left handed was considered a sign of having been “touched by the devil.” This is reflected in etymology by the Latin “sinister” meaning both left and evil. We have come a long way on handedness since. It is time to do the same with other forms of biological determinism, including what we can and cannot study, what roles we can and cannot have in society, and whom we can and cannot love. We should be actively building towards that future today, including working on increased diversity.

    Addendum

    After I wrote this post, I read the Slate Star Codex piece. While it is positioned as a defense of the Google Memo, it is actually making arguments about biological differences in interest selection, rather than in potential. Here too the logic of knowledge is more powerful than either biology or society. Yes, we absolutely have evidence for biological factors influencing interests (see above). Similarly, of course, we also have evidence for social and cultural factors playing a role in interest selection. 
    Particularly relevant to computer science here is the advent of personal computers in the 80s and how those were heavily positioned towards young males. This is likely to have been a factor in the change in CS enrollment patterns in college, as more males than females arrived with prior knowledge.

    Critically though, because we understand all of this rationally, we are not slaves to a pattern. Neither biology, nor existing society, has to remain determinative in interest selection. Instead we get to make choices. This is the beauty and power of knowledge! In many fields we have already intentionally chosen to give everyone broad exposure early on, so people can discover and develop an interest. We should do the same with computers (a great initiative in that regard is my partner Fred’s work on bringing computer science to high schools in New York City). As importantly, we need to revamp the overall education system, including higher education, so that people who get a later start on computers, or any other subject for that matter, can still develop their full potential. Thankfully, technological progress makes that possible (e.g. through online learning), but our institutions are lagging behind substantially. Changing our institutions requires us to want that change. We have to want to get to the Knowledge Age, it won’t get here by itself.
  • Uncertainty Wednesday: Jensen’s Inequality August 2, 2017 9:26 pm
    Last week in Uncertainty Wednesday, I introduced functions of random variables as the third level in measuring uncertainty. Today I will introduce a beautiful result known as Jensen’s inequality. Let me start by stating the inequality:

    f[EV(X)] ≤ EV[f(X)] where f is a convex function

    In words, if we apply a convex function to the expected value of a random variable, then we get a value no greater than if we take the expected value of the same function of the random variable. This turns out to be an extremely powerful result.

    Jensen’s inequality explains, among other things, the existence of risk seeking and risk aversion (via the curvature of the utility function), why options have value and how we should structure (corporate) research. I will go into detail on these in future Uncertainty Wednesdays. Today, I want to show this wonderful picture from Wikipedia, which gives a visual intuition for the result:

    And before we get into applications and implications of the inequality, I should mention for completeness that the reverse holds for concave functions, meaning

    g[EV(X)] ≥ EV[g(X)] where g is a concave function

    Next Wednesday we will look at utility functions and risk seeking / risk aversion as explained by these inequalities. 
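As a quick sanity check, the inequality is easy to verify numerically. Here is a minimal sketch (my own illustration, not from the post) using a hypothetical two-outcome random variable, a convex function x², and a concave function log(x):

```python
import math

# A simple discrete random variable: X = 1 or X = 9, each with probability 0.5.
values = [1.0, 9.0]
probs = [0.5, 0.5]

def expectation(f):
    """Expected value of f(X) for the discrete random variable above."""
    return sum(p * f(x) for x, p in zip(values, probs))

ev_x = expectation(lambda x: x)  # EV(X) = 5.0

# Convex f(x) = x^2: f[EV(X)] = 25, while EV[f(X)] = 0.5*1 + 0.5*81 = 41
assert ev_x ** 2 <= expectation(lambda x: x ** 2)

# Concave g(x) = log(x): the inequality flips, g[EV(X)] >= EV[g(X)]
assert math.log(ev_x) >= expectation(math.log)
```

The gap between the two sides (41 versus 25 for x²) is exactly the kind of effect that later gives options their value.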
  • VPNs and Informational Freedom July 31, 2017 9:37 am
    In my draft book “World After Capital” I have a section on the need for increased “Informational Freedom.” There I write:

    By design, the Internet does not embody a concept of geographic regions. Most fundamentally, it constitutes a way to connect networks with one another (hence the name “Internet” or network between networks). Since the Internet works at global scale, it follows that any geographic restrictions that exist have been added in, often at great cost.

    As well as:

    The same additional equipment used by governments to re-impose geographic boundaries on the Internet is also used by ISPs to extract additional economic value from customers, in the process distorting knowledge access. These practices include paid prioritization and zero rating.

    Virtual Private Networks (or VPNs) are a way for citizens to circumvent these artificial restrictions imposed by governments and by ISPs. That’s why it is dismaying to now see a movement to ban VPNs around the world. China just got Apple to remove VPN apps from the Chinese App Store. And Russia is banning the use of VPNs altogether starting in November of this year.

    If we want to preserve humanity’s ability to connect freely with each other, we need to respond in many different ways. Here are just some ideas that seem important:

    1. Political action to fight against bans on VPNs, making sure they remain legal in as many countries as possible
    2. Making VPNs broadly available through easy to use applications that can find mass market adoption
    3. Supporting open phone operating systems, such as UBports, so people can easily run any software they want on their phones
    4. Incentivized systems for providing global traffic routing on existing networks (stay tuned for something from Meshlabs)
    5. Incentivized systems for wireless mesh networking

    By incentivized systems here I mean something akin to the blockchain, where contributors (miners) can earn a cryptocurrency in return for providing infrastructure. If you know of other projects / initiatives in these areas, I would love to learn about them.
  • Putting People First on Healthcare July 28, 2017 7:35 pm
    There is lots wrong with the healthcare and health insurance system in the US. One can also have a rational debate about the pros and cons of the Affordable Care Act and how we might proceed from here. What should not happen, however, is pushing through poorly thought through measures just for the sake of making a change. Even more so, when there has been ample time to come up with something well designed.

    So I was glad to see that three GOP senators voted against the latest half baked attempt at undoing the ACA. Particularly commendable was the opposition by Senators Lisa Murkowski and Susan Collins, who bore the brunt of the pressure from their party and from the President. John McCain also finally found the courage to cast a “No” vote.

    It will be interesting to see what happens next. It would be great if the Republicans and Democrats could work together to improve the ACA, or propose some actually well-thought-out alternative. Instead, I fear that partisan politics will continue to dominate with every attempt made to have the ACA fail for a cheap “I told you so” moment. For the sake of all of those depending on it, I hope I am wrong about that.
  • Uncertainty Wednesday: Functions of Random Variables July 27, 2017 2:59 am
    Just as a reminder, we have been spending the last few weeks of Uncertainty Wednesday exploring different measures of uncertainty. We first looked at entropy, which is a measure based only on the states of the probability distribution itself. We then encountered random variables, which associate values or “payouts” with states, and learned about their expected value and variance (including continuous random variables). Today we will look at functions of random variables.

    We will assume that we have a random variable X and we are interested in looking at the properties of f(X) for some function f. Now you might say, gee, isn’t that just another random variable Y? And so why would there be anything new to learn here? To motivate why we want to explore this, let’s go back to the post in which I introduced the different levels at which we can measure uncertainty. There I wrote:

    Payouts are only the immediate outcomes. The value or impact of these payouts may be different for different people. What do I mean by this? Suppose that we look at a situation where you can either win $1 million with 60% probability or lose $10 thousand with 40% probability. This seems like a no brainer situation. But for some people losing $10 thousand would be a rounding error on their wealth, whereas for others it would mean becoming homeless and destitute.

    We now have the language to analyze the uncertainty in this. First we can compute the entropy

    H(X) = - [0.6 * log 0.6 + 0.4 * log 0.4] = 0.971

    We can also calculate the expected value and variance as follows:

    EV(X) = 0.6 * 1,000,000 + 0.4 * (-10,000) = 596,000
    VAR(X) = 0.6 * (1,000,000 - EV(X))^2 + 0.4 * (-10,000 - EV(X))^2 = 244,824,000,000

    But as the text makes clear, none of these capture the vastly different impact these payoffs might have for different people. One way to do that is to introduce the idea of a utility function U which translates payoffs into how a person feels or experiences these payoffs. 
    Consider the following utility function

    U(X) = log (IE + X)

    where IE is the initial endowment, meaning the wealth someone has before encountering this uncertainty. The uncertainty faced by someone with IE = 10,000 is dramatically different than for someone with IE = 1,000,000. In fact, for IE = 10,000, when the payoff is -10,000, the utility function goes to negative infinity (chart produced with Desmos; technically you’d have to consider a limit, but you get the idea).

    So we can see that applying a function to a random variable can have dramatic effects on uncertainty. Next week we will dig deeper into what we can know about the impact of applying a function. In particular we will be interested in questions such as how does EV[U(X)] relate to U[EV(X)] — meaning what can we say about taking the expected value of the function of the random variable versus plugging the expected value into the function?
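The numbers in this post are easy to reproduce. Here is a minimal sketch (Python is my choice for illustration; the endowment values compared at the end are hypothetical) computing the entropy, expected value, variance, and the expected log utility for different initial endowments:

```python
import math

# Payoffs and probabilities from the example: win $1M w.p. 0.6, lose $10k w.p. 0.4
payoffs = [1_000_000, -10_000]
probs = [0.6, 0.4]

# Entropy depends only on the probabilities (base-2 logs)
H = -sum(p * math.log2(p) for p in probs)  # ≈ 0.971

# Expected value and variance of the payoff
EV = sum(p * x for p, x in zip(probs, payoffs))               # 596,000
VAR = sum(p * (x - EV) ** 2 for p, x in zip(probs, payoffs))  # 244,824,000,000

# Log utility with initial endowment IE: U(X) = log(IE + X)
def expected_utility(ie):
    return sum(p * math.log(ie + x) for p, x in zip(probs, payoffs))

# For IE = 10,000 the bad outcome wipes out all wealth: log(0) diverges, so the
# expected utility is, in the limit, negative infinity. For anyone with a larger
# cushion the same gamble looks better and better:
assert expected_utility(1_000_000) > expected_utility(20_000)
```

The point is that H, EV and VAR are identical for everyone, while expected_utility differs sharply with the endowment, which is exactly why functions of random variables are worth studying.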
  • Attention on Digital Monopolies July 25, 2017 12:10 pm
    The dominant position of companies such as Google, Facebook and Amazon is certainly receiving a lot more attention these days. There is critical media coverage, including in traditionally pro business publications such as the Wall Street Journal “Can the Tech Giants Be Stopped?” and Bloomberg “Should America’s Tech Giants Be Broken Up?” There is also the Democratic Party’s “Better Deal” memo which focuses more broadly on the negative effects of corporate power. And then of course there is the European Union, which already fined Google 2.4 billion euros for manipulating search results and is considering another fine for Google’s alleged forced bundling of Google services with Android.

    While I am happy to see the attention on the issue, I am concerned that regulators are missing the fundamental source of monopoly power in the digital world: network effects arising from the control of data. This will continue to lead to power law size distributions in which the number 1 in a market has a dominant position and is many times bigger than the number 2. That dynamic will play itself out not just for the very large companies which regulators are starting to look at but will be true in lots of other markets as well. The only way to go up against this effect is to shift computational power to the network participants.

    I first started to write about this approach nearly three years ago in a post titled “Right to an API Key.” I then expanded this idea into what I am calling the “right to be represented by a bot” – as an end user I should be able to have all my interactions with digital systems intermediated by software that I control 100%. You can watch my TEDx talk about this and also read more about it in the Informational Freedom section of my book World After Capital. Unfortunately, instead of looking for this kind of digitally native solution, regulators are largely reverting to the industrial age tool of antitrust regulation. 
As a result I have a feeling we will be stuck with network effects based digital monopolies for quite some time, despite the exciting work that is happening around decentralized blockchain-based systems.
  • Homo Deus by Yuval Harari (Book Review) July 21, 2017 3:12 pm
    I previously wrote a review of Yuval Harari’s Sapiens, which I highly recommended, despite fundamentally disagreeing with one of its central arguments. Unfortunately, I cannot say the same about Homo Deus. While the book asks incredibly important questions about the future of humanity, it not only comes up short on answers, but, more disappointingly, it presents caricature versions of other philosophical positions. I nonetheless finished Homo Deus because it is highly relevant to my own writing in World After Capital. Based on some fairly positive reviews, I kept expecting a profound insight until the end, but it never came.

    One of the big recurring questions in Homo Deus is why we, Homo Sapiens, think ourselves to be the measure of all things, putting our own interests above those of all other species. Harari blames this on what he calls the “religion of humanism,” which he argues has come to dominate all other religions. There are profound problems both with how he asks this question and with his characterization of Humanism. Let’s start with the question itself. In many parts of the book, Harari phrases and rephrases this question in a way that implies humanity is being selfish, or speciest (or speciesist, as some spell it). For instance, he clearly has strong views about the pain inflicted on animals in industrial meat production. While it is entirely fine to hold such a view (which I happen to share), it is not good for a philosophical or historical book to let it guide the inquiry. Let me provide an alternative way to frame the question. On airplanes the instructions are to put the oxygen mask on yourself first, before helping others. Why is that? Because you cannot help others if you are incapacitated due to a lack of oxygen. Similarly, humanity putting itself first does not automatically have to be something morally bad. 
    We need to take care of humanity’s needs, if we want to be able to assist other species (unless you want to make an argument that we should perish). That is not the same as arguing that all of humanity’s wants should come first. The confusion between needs and wants is not at all mentioned in Homo Deus but is an important theme in the wonderful “How Much is Enough” by Edward and Robert Skidelsky and in my book “World After Capital.”

    Now let’s consider Harari’s approach to Humanism. For someone who is clearly steeped in history, Harari’s definition of Humanism confounds Enlightenment ideas with those arising from Romanticism. For instance, he repeatedly cites Rousseau as being a key influence on “Humanism” (putting it in quotes to indicate that this is Harari’s definition of it), but Rousseau was central to the romanticist counter movement to the Enlightenment, which was championed by Voltaire. If you want an example of a devastating critique, read Voltaire’s response to Rousseau. One might excuse this commingling as a historical shorthand, seeing how Romanticism quickly followed the Enlightenment (Rousseau and Voltaire were contemporaries) and how much of today’s culture is influenced by romantic ideas. Harari makes a big point of the latter, frequently criticizing the indulgence in “feelings” that permeates so much of popular culture and has also invaded politics and even some of modern science. But this is a grave mistake as it erases a 200-year history of secular enlightenment-style humanist thinking that does not at all give a primacy to feelings. Harari pretends that we have all followed Rousseau, when many of us are in the footsteps of Voltaire.

    This is especially problematic, as there has never been a more important time to restore Humanism, for the very reasons of dramatic technological progress that motivate Harari’s book. 
    Progress in artificial intelligence and in genomics makes it paramount that we understand what it means to be human before taking steps to what could be a post human or trans human future. This is a central theme of my book “World After Capital” and I provide a view of Humanism that is rooted in the existence and power of human knowledge. Rather than restate the arguments here, I encourage you to read the book.

    Harari then goes on to argue how progress in AI and genetics will undermine the foundations of “Humanism,” thus making room for new “religions” of trans humanism and “Dataism” (which may be a Harari coinage). These occupy the last part of the book and again Harari engages with caricature versions of the positions, which he sets up based on the most extreme thinkers in each camp. While I am not a fan of some of these positions, which I believe run counter to some critical values of the kind of Humanism we should pursue, their treatment by Harari robs them of any intellectual depth. I won’t spend time here on these, other than to call out a particularly egregious section on Aaron Swartz, whom Harari refers to as the “first martyr” for Dataism. This is a gross mistreatment of Aaron’s motivations and actions.

    There are other points where I have deep disagreements with Harari, including the existence of Free Will. Harari’s position, that there is no free will, feels like it is inspired by Sam Harris in its absolutism. You can read my own take. I won’t detail all of these other disagreements now as they are less important than the foundational misrepresentation of what Humanism has been historically and the ignorance of what it can be going forward.  
  • Uncertainty Wednesday: Continuous Random Variables (Cont’d) July 19, 2017 11:37 am
    Last time in Uncertainty Wednesdays, I introduced continuous random variables and gave an example of a bunch of random variables following a Normal Distribution.

    Now in the picture you can see two values, denoted as μ and σ^2, for the different colored probability density functions. These are the two parameters that completely define a normally distributed random variable: μ is the Expected Value and σ^2 is the Variance.

    This is incredibly important to understand. All normally distributed random variables have only 2 free parameters. What do I mean by “free” parameters? We will give this more precision over time, but basically for now think of it as follows: a given Expected Value and Variance completely define a normally distributed Random Variable. So even though these random variables can take on an infinity of values, the probability distribution across these values is very tightly constrained.

    Contrast this with a discrete random variable X with four possible values x1, x2, x3 and x4. Here the probability distribution p1, p2, p3, p4 has the constraint that p1 + p2 + p3 + p4 = 1 where pi = Prob(X = xi). That means there are 3 degrees of freedom, because the fourth probability is determined by the first three. Still, that is one more degree of freedom than for the Normal Distribution, despite having only four possible outcomes (instead of an infinity).

    Why does this matter? Assuming that something is normally distributed provides a super tight constraint. This should remind you of the discussion we had around independence. There we saw that assuming independence is actually a very strong assumption. Similarly, assuming that something is normally distributed is a strong constraint because it means there are only two free parameters characterizing the entire probability distribution.
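To make the parameter-counting concrete, here is a minimal sketch (my own illustration, with hypothetical numbers): once μ and σ^2 are fixed, the entire normal density is determined, whereas a four-outcome discrete distribution still leaves three probabilities to choose freely.

```python
import math

def normal_pdf(x, mu, var):
    """Density of a normal distribution; mu and var pin down the whole curve."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Every probability statement now follows from just these two parameters.
# For example, the density of the standard normal (mu=0, var=1) peaks at mu:
peak = normal_pdf(0.0, 0.0, 1.0)  # 1/sqrt(2*pi)
assert peak > normal_pdf(1.0, 0.0, 1.0)

# Contrast: a discrete distribution over four outcomes has three free
# parameters; once p1, p2, p3 are chosen, p4 is forced by summing to 1.
p1, p2, p3 = 0.1, 0.2, 0.3
p4 = 1 - (p1 + p2 + p3)  # no freedom left
assert abs((p1 + p2 + p3 + p4) - 1.0) < 1e-12
```

Changing any of p1, p2, p3 gives a genuinely different distribution, while a "normal" assumption collapses all that freedom down to two numbers.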