Starting today, October 4, I will be coding a little bit every day on topics that I enjoy.

Day 1 (Oct 4)

Tutorial

Today, I started following this tutorial. I did a bit of graphics programming in university, but I still find it fun, so this was one of the first tutorials I wanted to try.

With ChatGPT's help, here are the things I learned so far:

  1. Legacy OpenGL and modern OpenGL are different: in the legacy (fixed-function) pipeline you tell OpenGL which built-in effects to apply to your objects, while in the modern pipeline you write shaders and apply them yourself. The modern way is harder to understand but more powerful (probably).
  2. The objects supplied to the renderer consist of vertex positions and normal vectors that describe which way each face is pointing.
  3. Culling refers to not rendering one side of a face, usually the back side. This is for efficiency reasons. There are times you don't want to cull: some effects, like transparency, rely on both sides being rendered.
  4. OpenGL calls act on a global context (a big state machine), not on command buffers like in Vulkan. Command buffers again give more power to the programmer at the cost of increased complexity.
  5. GL_DEPTH_TEST makes OpenGL track the depth of each fragment and draw nearer objects in front of farther ones. This is necessary for 3D, except when objects are transparent (those need special handling).
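To make point 5 concrete, here's a toy pure-Python version of a depth test — a sketch of the idea, not real OpenGL (on the GPU this happens per fragment in hardware):

```python
# Toy software z-buffer illustrating what GL_DEPTH_TEST does:
# for each pixel, keep only the fragment closest to the camera.
def render(fragments, width, height):
    """fragments: list of (x, y, depth, color); smaller depth = closer."""
    depth_buffer = [[float("inf")] * width for _ in range(height)]
    color_buffer = [[None] * width for _ in range(height)]
    for x, y, depth, color in fragments:
        if depth < depth_buffer[y][x]:  # the depth test: nearer fragment wins
            depth_buffer[y][x] = depth
            color_buffer[y][x] = color
    return color_buffer

# Two fragments land on pixel (0, 0); the red one at depth 0.5 is nearer,
# so it hides the blue one at depth 0.9 regardless of draw order.
frame = render([(0, 0, 0.9, "blue"), (0, 0, 0.5, "red")], width=2, height=1)
```

Without the `if depth < ...` check, whichever fragment was drawn last would win — which is exactly the wrong-looking 3D you get when the depth test is off.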

Day 2 (Oct 5)

Going to read a little bit of this: Building a torrent client from scratch

Here's what I learned (or am learning):

  1. Unlike what I assumed, torrent clients need a central HTTP server (the tracker) to get an initial list of peers. I thought the system was fully decentralized, but no. Because of this, if a tracker goes down, I believe the peers in those torrents lose each other, which sounds like a really big issue. However, the author mentions magnet links and other genuinely decentralized ways of discovering peers. So they got this problem figured out.
  2. Torrent (.torrent) files are really small, and the content they describe is split into pieces (typically 256 KB-1 MB each), each secured with a hash. This is a great system, since it makes the downloaded data tamper-proof — unless the server that gives you the torrent file is itself compromised, of course. If that server gives incorrect hashes AND points you to malicious peers, then it's the end. But if only one of these problems exists, your client will refuse the bad data.
  3. Torrent files are encoded in bencode, which is a very opinionated choice because it looks hard to read.
  4. User IP addresses AND ports are shown publicly. That seems kinda scary, especially the ports. Wonder if malware-ridden torrent clients can be used to hack into computers.
  5. Peers don't actually request the pieces mentioned above; they request blocks, which are byte ranges within a piece. Multiple blocks are requested at once because network round trips are super expensive. If you were to ask for the blocks one by one, your client would be more than an order of magnitude slower.
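The per-piece hash check from point 2 can be sketched with Python's hashlib (the function name here is my own, not from a real client; the real protocol does use a 20-byte SHA-1 digest per piece):

```python
import hashlib

def verify_piece(piece_data: bytes, expected_sha1: bytes) -> bool:
    # The .torrent file carries a SHA-1 digest for every piece;
    # a client throws away any downloaded piece whose hash doesn't match.
    return hashlib.sha1(piece_data).digest() == expected_sha1

# A piece matching its digest is accepted; a tampered one is refused.
piece = b"bytes downloaded from some peer"
good_digest = hashlib.sha1(piece).digest()
ok = verify_piece(piece, good_digest)
tampered = verify_piece(b"maliciously altered bytes", good_digest)
```

This is why a malicious peer alone can't poison your download: its bad data fails the hash check and gets re-requested from someone else.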

Day 3 (Oct 6)

Read a little bit of various tutorials.

  1. I learned that Discord bots must register their slash commands through the REST API.
  2. I found a League client real-time game API, which is quite exciting. Wonder why no one has built an AI bot?
  3. Not much else; just looked through various tutorials.
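Point 1 roughly looks like this — a sketch of registering a slash command against Discord's HTTP API, with a placeholder application ID and token (the request is built but deliberately not sent):

```python
import json
import urllib.request

APPLICATION_ID = "1234567890"  # placeholder
BOT_TOKEN = "YOUR_BOT_TOKEN"   # placeholder

# Slash commands are registered by POSTing their definition to the REST API;
# this is separate from the bot's gateway (websocket) connection.
url = f"https://discord.com/api/v10/applications/{APPLICATION_ID}/commands"
command = {
    "name": "ping",
    "description": "Replies with pong",
    "type": 1,  # CHAT_INPUT, i.e. a slash command
}
request = urllib.request.Request(
    url,
    data=json.dumps(command).encode(),
    headers={
        "Authorization": f"Bot {BOT_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(request) would actually send it; skipped here
# since the credentials above are fake.
```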

Day 4 (Oct 7)

Looked at WASP.

Day 5 (Oct 21)

I went all over the place.

First, I looked at Voyager, which is an LLM-powered Minecraft AI. It uses ChatGPT's internet knowledge of Minecraft to create tasks and assessments for a Minecraft player.
Initially, I was disappointed because an AI using internet knowledge will only be as good as the humans that wrote that internet knowledge in the first place. However, upon thinking about it more, I think learning from humans is the only way an AI can learn to solve real-world problems, because real-world problems are inherently human problems that are documented by humans.

And once ChatGPT's image input comes out (don't think it's out yet), the Minecraft player will be able to see the game and actually assign meaning to the objects, instead of abstractly figuring out the meaning of objects (which is how RL agents used to work). For example, it will see an axe, a sheep, and a hunger bar. Axes are weapons, sheep are food, and hunger bars track hunger in video games. It will then figure out from human knowledge that it should use the axe to kill the sheep to get food to fill the hunger bar. That simple!

However, to surpass human performance in a task like a game, traditional RL should probably be used. Nevertheless, pretraining on human data is a great idea. And the critic should maybe still be human. After all, these are human problems we are talking about.

What are word embeddings? Basically, take every word in the English language and assign it a vector with 100-300 dimensions, with two desirable properties: (1) words with similar meanings should have vectors close to each other, and (2) vector addition/subtraction should work, i.e. king - man + woman ≈ queen. LLMs don't use pre-existing word embeddings like Word2Vec but learn their own, refined by context and attention.
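A minimal sketch of property (2), using made-up 3-dimensional toy vectors (real embeddings are learned from data, not hand-written, and have far more dimensions):

```python
import math

# Hand-picked toy vectors; dimensions loosely mean (royalty, maleness, person-ness).
embeddings = {
    "king":  [0.9, 0.9, 0.8],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.9],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(u, v):
    # Cosine similarity: 1.0 means same direction, 0.0 means orthogonal.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# king - man + woman, computed component-wise.
target = [k - m + w for k, m, w in zip(embeddings["king"],
                                       embeddings["man"],
                                       embeddings["woman"])]

# The nearest word to the result, by cosine similarity, is "queen":
# subtracting "man" removes the maleness dimension, adding "woman" keeps
# the rest, and royalty carries over from "king".
nearest = max(embeddings, key=lambda word: cosine(embeddings[word], target))
```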

Day 6 (Oct 22)

Looked at more Rust things: shuttle.rs (which seems really, really interesting), some Kafka things, among others.