This is a hack for the popular game Flappy Bird. Although the game is on Google Play and the App Store, that did not stop folks from creating very good replicas for the web. People have also created some interesting variants of the game.
After playing the game a few times (read: a few hours), I saw an opportunity to practice my machine learning skills and try to get Flappy Bird to learn how to play the game by itself. The video above shows the results of a really well trained Flappy Bird that basically keeps dodging pipes forever.
Initially, I wanted to create this hack for the Android app, and I was planning to use an automation tool to grab screenshots and send click commands. But it took about 1-2 seconds to get a screenshot, and that was definitely not fast or responsive enough.
Then I found the Omega500 game engine and a clone of Flappy Bird built for typing practice. I ripped out the typing component and added some JavaScript Q Learning code to it.
Here's the basic principle of reinforcement learning: the agent, Flappy Bird in this case, performs a certain action in some state. It then finds itself in a new state and receives a reward based on that transition. There are many variants of reinforcement learning suited to different situations: Policy Iteration, Value Iteration, Q Learning, etc.
I used Q Learning because it is a model-free form of reinforcement learning. That means I didn't have to model the dynamics of Flappy Bird: how it rises and falls, reacts to clicks, and other things of that nature.
Q Learning has a nice, concise description. The algorithm proceeds as follows.
I discretized my state space over the following parameters.
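The parameters themselves aren't listed in this copy of the post, but the idea of discretization can be sketched as follows. The inputs here (horizontal and vertical distance to the next pipe) and the bin size are assumptions for illustration, not the post's actual values.

```javascript
// Hypothetical discretization sketch: bucket continuous distances into a
// coarse grid so the Q table stays small. dx/dy and binSize are made up.
function discretizeState(dx, dy, binSize) {
  // Snap each distance to its bin; the pair of bin indices is the state key.
  const x = Math.floor(dx / binSize);
  const y = Math.floor(dy / binSize);
  return x + ',' + y;
}

discretizeState(123, -45, 10); // → "12,-5"
discretizeState(129, -41, 10); // → "12,-5" (nearby positions share a state)
```

Coarser bins mean fewer states and faster learning, at the cost of the bird reacting less precisely near pipe edges.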
For each state, there are two possible actions: "Click" and "Do Nothing".
The reward structure is purely based on the "Life" parameter.
The array Q is initialized with zeros, and I always choose the best action, the one that maximizes my expected reward. To break ties I choose "Do Nothing" because that is the more common action.
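This greedy selection with the "Do Nothing" tie-break can be sketched like so (the Q-table layout is my own choice here, not the post's actual data structure):

```javascript
// Greedy action selection over a Q table stored as a plain object:
// Q[state] = [valueOfDoNothing, valueOfClick].
function bestAction(Q, state) {
  const q = Q[state] || (Q[state] = [0, 0]); // lazily initialize with zeros
  // Strict ">" (not ">=") means a tied "Click" never beats "Do Nothing",
  // which implements the tie-breaking rule from the post.
  return q[1] > q[0] ? 1 : 0; // 0 = Do Nothing, 1 = Click
}
```

Since Q starts at all zeros, every state begins as a tie, so an untrained bird defaults to never clicking until the updates differentiate the two actions.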
Step 1: Observe what state Flappy Bird is in and perform the action that maximizes expected reward.
Let the game engine perform its "tick". Now, Flappy Bird is in a new state, s'.
Step 2: Observe new state, s', and the reward associated with it. +1 if the bird is still alive, -1000 otherwise.
Step 3: Update the Q array according to the Q Learning rule.
Q[s,a] ← Q[s,a] + α (r + γ·V(s') − Q[s,a]), where V(s') = max over a' of Q[s',a']
The alpha I chose is 0.7 because the environment is deterministic and I wanted it to be pretty hard to un-learn something. Also, the discount factor, gamma, was 1.
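A single update with these parameters might look like the following, a minimal sketch assuming the same object-of-arrays Q table (the post's actual code is not shown here):

```javascript
// One Q Learning update for the transition (s, a) -> (s2, r).
// alpha = 0.7 and gamma = 1.0, the values the post uses.
function updateQ(Q, s, a, r, s2, alpha = 0.7, gamma = 1.0) {
  const q = Q[s] || (Q[s] = [0, 0]);
  const next = Q[s2] || (Q[s2] = [0, 0]);
  const V = Math.max(next[0], next[1]); // V(s') = max_a' Q[s', a']
  q[a] = q[a] + alpha * (r + gamma * V - q[a]);
}
```

With gamma = 1 there is no discounting, so the -1000 death penalty propagates backward at full strength through the states that led to a crash.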
Step 4: Set the current state to s' and start over.
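The four steps can be wired together into a single per-frame function. The engine hooks here (game.getState, game.click, game.tick, game.isAlive) are hypothetical names for illustration; the real Omega500 hooks differ.

```javascript
const ALPHA = 0.7, GAMMA = 1.0; // learning rate and discount from the post

// One full iteration of steps 1-4 against a hypothetical engine interface.
function step(Q, game) {
  // Step 1: observe the state and act greedily (ties go to "Do Nothing").
  const s = game.getState();
  const qs = Q[s] || (Q[s] = [0, 0]);
  const a = qs[1] > qs[0] ? 1 : 0; // 0 = Do Nothing, 1 = Click
  if (a === 1) game.click();
  game.tick(); // let the engine advance one frame

  // Step 2: observe s' and its reward: +1 alive, -1000 dead.
  const s2 = game.getState();
  const r = game.isAlive() ? 1 : -1000;

  // Step 3: Q Learning update with V(s') = max over next-state actions.
  const qn = Q[s2] || (Q[s2] = [0, 0]);
  qs[a] += ALPHA * (r + GAMMA * Math.max(qn[0], qn[1]) - qs[a]);

  // Step 4: s' becomes the current state on the next call.
  return s2;
}
```

Calling step once per engine tick is all the training loop needs; the Q table persists across deaths, so each crash makes the states leading up to it less attractive.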
I'd like to give a shout out to the author of the Omega500 game engine for creating it and making it open source!