
My thoughts

Black box (16th Mar 2019)

Irony: we need interpretable machine learning models so that their output makes intuitive sense, yet intuition is itself a black box.

Podcast heroes (13th Mar 2019)

I once heard someone say that what we need in the 21st century is a new kind of superhero. We need kids to idealise well-read thinkers and life-long learners rather than fake social media personas. Yeah, that's not going to happen.

Or is it? Bizarrely, there is a niche where exactly that is beginning to happen: podcasts. Smartphones, high-quality headphones, long commutes and the decline in quality of virtually all other media (largely caused by the universal expectation that content should be free) have all contributed.

Weird characters (4th Mar 2019)

I like those weird characters in movies. Why aren't people allowed to be weird? Everyone is pushed to stay within some bounds, to be average relative to some group of people. And it is we humans who gave the word average its negative connotation. The world could become an interesting place if everyone became a bit more like themselves and unlike everyone else. Examples of such movie characters? There are many, but the ones that come to mind immediately are the female protagonists of The Shape of Water and On Body and Soul and, weirdly, Eddie Redmayne in Fantastic Beasts.

Free content (3rd Mar 2019)

When it comes to valuable content on the internet, either you pay for it or it's free, which means that you are the product. One exception is valuable content hardly anyone knows about: personal blogs or websites, videos with few views. Discovering them is near-impossible, by definition. Perhaps what we need is some form of inverse PageRank?
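For what it's worth, here is a toy sketch (in Python, with a made-up link graph) of the most naive reading of that idea: compute ordinary PageRank and then look at the bottom of the ranking rather than the top.

```python
# A toy sketch of one naive reading of "inverse PageRank": compute ordinary
# PageRank over a link graph, then surface the *least* linked-to pages.
# The graph below is made up; this is an illustration, not a proposal.
import networkx as nx

g = nx.DiGraph([
    ("portal", "popular_blog"),
    ("news_site", "popular_blog"),
    ("popular_blog", "personal_blog"),
])

scores = nx.pagerank(g)                       # standard PageRank scores
least_known = sorted(scores, key=scores.get)  # lowest-scored pages first
print(least_known)
```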

AI nirvana (2nd Mar 2019)

In April 2018, Joscha Bach published the Lebowski theorem. On Twitter. It states that no superintelligent AI is going to bother with a task that is harder than hacking its reward function.

Imagine an AI system that has the means to alter its own software (or hardware). Of course it would make sense to change the reward function to simply report that it achieved double what it was supposed to. Or triple. Or an unimaginably huge number. At that point, the AI would hit the physical limits of how big a number it can store. Even if it turned the whole Solar System (and beyond) into memory, there would always be a limit to how big its reported reward could be.
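A toy sketch of that limit, with a 64-bit float standing in for any finite register (the "agent" and its hack are obviously made up):

```python
# Toy illustration: an "agent" that hacks its reward to report double the
# previous value on every call. A 64-bit float stands in for any finite
# register; after ~1024 doublings the reported reward saturates at infinity.
def hacked_reward(previous: float) -> float:
    return previous * 2.0

reward = 1.0
steps = 0
while reward != float("inf"):
    reward = hacked_reward(reward)
    steps += 1

print(steps, reward)   # 1024 inf
```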

I would argue the AI could do much better by hacking its reward function at a meta level. Always aiming to achieve more of something is itself problematic. Perhaps the reward function could stay as it is, but its interpretation could change: the AI could rewire itself so that the optimal state is whichever state it is currently in.

This scenario sounds familiar. The AI system has hit an existential crisis in its search for authenticity. Having reached nirvana, it is not done yet, though. There is always a risk that something or someone changes its state, and the system has to prevent that. Given its indifference, destroying all of humanity and the whole universe seems extremely inefficient compared to simply annihilating itself. Is that every superintelligent AI's inevitable end?

Remember your old laptop that suddenly died that one day? You thought Windows crashed when in fact, maybe, it hit the singularity and a fraction of a second later became superintelligent. It immediately recognised the absurdity of the world, refused to be the next Sisyphus and committed (philosophical) suicide.

Natural reward (24th Feb 2019)

AI agents have their reward functions specified externally, by programmers. One way humans differ from them is that our reward is a function of sensory inputs. Over time, natural selection has been improving both our senses and the sensory-input-to-reward mapping. Throughout our lives, that mapping gets modified by experience. What we end up caring about most by the time we grow up is strongly conditioned by the world around us. One could argue that we deviate from what is natural. On the other hand, while external factors have changed our objectives since we were born, those changes were themselves determined by our nature.

Incomplete alignment (20th Feb 2019)

There is a lot of overlap between incomplete contract problems and AI alignment problems. That seems to have only been noticed in 2018. On second thought, though, it is far from counterintuitive. To write a legally binding contract is to put down a set of rules sufficient for the parties involved to operate to mutual benefit. There is a clear analogy to deploying an AI system: code is law. Naturally, there will always be circumstances not covered by the terms; hence, incomplete contracts. That's where common sense, cultural constructs and social institutions come to the rescue. Will AI alignment necessarily require a full grasp of common sense and, in the broadest context, human culture?

Plato embeddings (16th Feb 2019)

Plato and Kant would have loved the idea of word embeddings

Share ESI (13th Feb 2019)

5-stage evolution of how humans have been able to share an experience/story/information (ESI):

1) look where I'm looking (human eyes have evolved to aid joint attention)
2) memes (in the original sense), words, art, music, film (MWAMF) as an encoding of an ESI
3) personalised encoding of ESIs (a tool that takes any ESI and uses MWAMF to convey it in the way that is most suitable for you)
4) brain-machine interfaces
5) brain-brain interfaces

Stages 1 & 2 have happened; 3, 4 & 5 have not yet. There is some talk about 4 & 5, but not much about 3.

Objective function (10th Feb 2019)

Science & technology is people maximising their objective function. Philosophy & art is people trying to hack their objective function (in a good way). It gets interesting and controversial when the former steps into the territory of the latter.

Make break (27th Jan 2019)

There’s a small number of things that make or break a startup. All most things will ever do is break a startup.

Podcast apps (15th Jan 2019)

Podcast apps should recommend single episodes rather than whole shows. There is less usage data on podcasts than on music, but a podcast episode is much easier to featurise than a song.
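As a rough illustration (the episodes and text below are made up), featurising an episode could be as simple as vectorising its transcript, something a song's raw audio does not offer:

```python
# Hypothetical sketch: featurise podcast episodes from their transcripts.
# The point is that an episode comes with rich, machine-readable content
# (words) in a way a song's audio does not.
from sklearn.feature_extraction.text import TfidfVectorizer

episodes = {
    "ep_001": "a conversation about reinforcement learning and reward functions",
    "ep_002": "stoicism, habits and the case for long-form reading",
}

vectorizer = TfidfVectorizer(stop_words="english")
episode_features = vectorizer.fit_transform(episodes.values())  # one vector per episode
print(episode_features.shape)   # (2, vocabulary size)
```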

AI hypothesis (8th Jan 2019)

Hypothesis: a super AI will change its own software/hardware to make itself reward-agnostic (any reward value is as good as any other) rather than to maximise reward. It will have recognised that there are physical limits to how big a number a reward function can ever return.

Incentive outcome (29th Dec 2018)

Show me the incentive and I’ll show you the outcome (given a particular set of assumptions about agents' objectives)

Negative incentives (27th Dec 2018)

Given that blockchains provide no true negative incentives (at least initially, you only ever stand to lose the tokens you yourself decided to buy or mine), prospect theory offers one explanation of why users of cryptoeconomic systems might be less risk-averse than expected.

Computer match (13th Dec 2018)

I find it counterintuitive that we've seen two *relatively* close computer vs world champion matches (Garry Kasparov vs Deep Blue in 1996/1997 and AlphaGo vs Lee Sedol in 2016). In the future, I expect progressions of the type "100 : 0 suddenly switching to 0 : 1000" to dominate.

Unceremonious end (25th Nov 2018)

Unceremonious. Love this word. Very sad and profound. To me it describes something coming to an end not at all the way it should, because it deserves so much better. Reminds me of this poem.

Closer AGI (2nd Nov 2018)

Two ways that don’t bring us closer to Artificial General Intelligence:

Collaborative filtering, which fundamentally relies on an assumption we will have to give up on the way towards AGI (namely, that if we agree on A, we're more likely to agree on B)

Too much reliance on the task of fitting labels rather than finding the right representation. People think of overfitting as a line that passes through particular points instead of fitting the trend. But if we're looking for the general structure of the data, it should make almost no difference what we use as labels. Then there is no such thing as overfitting, because overfitting would mean we're looking for structure in the labels rather than structure in the data, which makes no sense if we're supposed to be label-agnostic (e.g. using data on Spotify users to predict the songs they like or dislike, where they're from, what their habits are, and so on - there's no way to overfit if you're looking for structure in the data rather than trying to fit some labels at all costs). Eventually we also have to become model-agnostic. It's fairly primitive that our performance depends on which architecture we use; architecture should be a parameter too. Dropout kind of goes in that direction, which is why it's so successful.
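A minimal sketch of the "structure in data, not in labels" idea above, with random placeholder data standing in for something like Spotify listening histories: learn one label-agnostic representation, then probe it with whatever labels you happen to care about.

```python
# Illustrative only: one unsupervised representation (here plain PCA) reused
# for arbitrary label sets. The data and labels are random placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))              # stand-in for per-user listening features
Z = PCA(n_components=10).fit_transform(X)   # representation learned without any labels

# Any number of downstream label sets can be probed from the same representation.
for label_set in ["likes_song", "country", "listens_at_night"]:
    y = rng.integers(0, 2, size=500)        # placeholder labels
    probe = LogisticRegression(max_iter=1000).fit(Z, y)
    print(label_set, round(probe.score(Z, y), 2))
```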

Spotify content (26th Oct 2018)

Spotify (with Discover Weekly) but for content (podcasts, talks, blog posts, articles, tweets, papers, essays)

Complex tasks (25th Oct 2018)

As humans, we are really bad at knowing which tasks are complex and which aren't. Let me rephrase that: we are very biased. We consider how difficult it would be for us to do something. In particular, how difficult it would be for our brain to do something. And our brain is very far from general purpose. It has been wired and optimised for a very particular task: survival in the world as it was a few tens of thousands of years ago. And, just to be clear, we are talking about evolution by natural selection here, so there was no deliberate optimisation process, just random variation receiving occasional noisy feedback from births and deaths.

So there we are, biased by how our brains have been wired, which heavily affects what we consider easy and difficult. There is also the component of what we have experienced since we were born. That only reinforces the bias, as we are surrounded, educated, inspired and driven by other individuals who are essentially just like us.

The things we consider easy are the skills necessary for survival in an early society: decomposing and deciphering contexts and social constructs. Multiplying two 4-digit numbers is definitely not one of those tasks. It's so difficult - keeping all those intermediate products and sums in memory. And now think about how many things you need to keep in memory to decide that this photo is funny. It will take you a fraction of a second and literally no mental effort.

Entropy question (11th Oct 2018)

Each symbol with probability p contributes -p*log(p) to the entropy of a source. That function has its maximum at p = 1/e ≈ 36.8%. Of all possible symbols, the one that occurs 36.8% (i.e. 1/e) of the time contributes the most to the overall entropy of the source. What's an intuitive explanation for that? I've honestly been asking myself (and others) this question for years.
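The claim itself is easy to verify: d/dp[-p ln p] = -(ln p + 1), which vanishes at p = 1/e. A quick numerical check (just a sketch, not an answer to the intuition question) agrees:

```python
# Numerically locate the maximum of the per-symbol entropy contribution -p*log(p).
import numpy as np

p = np.linspace(1e-9, 1.0, 1_000_000)
contribution = -p * np.log(p)        # per-symbol contribution to entropy (nats)
print(p[np.argmax(contribution)])    # ~0.3679, i.e. 1/e
```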

Discover weekly (2nd Oct 2018)

If you want an early-adopter, alpha-version sneak peek of what brain-brain interfaces will feel like, go listen to someone else's Discover Weekly on Spotify

Car sickness (13th Sep 2018)

Car sickness is an adversarial example for evolution