Halo Pt. 10: After Event Report

I just took Halo to its first event, Maker Faire Orlando 2018. The bot performed great, laying down hit after brutal hit, and eventually took second place. You can find all of the match footage below:

I’m going to take this post as an opportunity to summarize the good parts and bad parts in greater detail.

The good

Halo laid down punishment

It’s clear that Halo could transfer energy. We were spinning slower than planned, but the improved rotational inertia made up for the lost energy. And the low speed and large weapon gave us enormous bite potential.

Halo took punishment

Even if most of that punishment came from itself, it’s still clear that the monobody design made for a tough bot. I still need to see what a good horizontal spinner would do to us, but the vertical spinners weren’t able to do much to the chassis. There just weren’t any good surfaces to attack.

The motors also proved tougher than I had originally thought. We didn’t have to replace a single one all competition, despite the direct-driven wheels.

Halo was tactically difficult to attack

We could attack in every direction at the same time. This was especially perilous for other spinners, which may not be designed to withstand powerful hits. Against wedge bots, we packed so much energy that even a successful deflection knocked our opponent away. This allowed us to spin back up in between contacts, preventing the wedge bot from pinning us.

Halo looked good

Battlebots is at least 50% showing off, and the LED display made a great show. The pinball action also made for a kinetic, exciting match.

The bad

The beacon has multipath issues

The sensor-fusion system detailed in this blog is great at filtering out spurious reflections, but what we saw in the match were consistent reflections. The result was that the beacon locked on to a random direction. I eventually got used to it and was able to adjust my driving, but it’s clearly not what was intended.

The tooth warps and cracks the chassis around it

Amazingly, the aluminum ring yielded before anything else did. It’s a testament to Pierce’s design and machining ability that it warped in this way, and still laid down punishment. Still, we should shore up the chassis there to prevent this from happening again.

The electrical components were not shock-proof enough

I need to be more careful with my component selection and mounting strategies. More epoxy is also good.

The LEDs don’t have the best viewing angle

The LEDs are partially obscured by the modules on the circuit board, and it’s not always easy to see inside the ring. It would be better to find a way to make them shine outwards.

So what’s next?

We feel really good about this design, and other people seem to enjoy it too. So we will continue competing with it.

Of course, we still have a lot of work ahead of us. The ring is a total wash: even though it “survived”, the crack in the chassis would only continue to get worse. The circuit board took too much damage, and would need to be replaced even if we weren’t making any changes. On top of this, we are looking at some big design changes.

We are changing the absolute reference sensor

I’m not going to remove the infrared beacon entirely. It barely takes any board space or processing power, so we might as well keep it as a backup. I’m currently investigating other methods that can provide an absolute reference; if any prove feasible, they will be detailed here.

There also may be a way to make the beacon better. If I offload some extra processing to the microcontroller, I can look for the strongest signal instead of just the first signal. This may let us ignore reflections entirely.

We are changing the processor

The Teensy served us well, but for mechanical shock reasons we should keep the number of plug-in modules to a minimum. I’m currently looking at the STM32 line as a replacement. Regardless of the choice, we will also port the code from Arduino to straight C.

We are changing the ESC

The ESCs we had were wonderful for what they were. They proved robust and powerful. However, if I tried to change the motor speed too much too fast, they would lose track of the motor and reset. This limited how fast I could translate.

I think the best thing to do is add sensors to the motors. This will give me much better control over the motor, improve spin up times, and let me win pushing matches.

The motors will stay the same; I can just add sensors to the outside of the case. But the ESC will have to support sensored operation. So goodbye, ReadyToSky 40A.

We are changing the LEDs slightly

I’m investigating ways to make the LEDs shine outwards. I would also like to take advantage of a faster processor and go from five LEDs to seven. This will let me show more detailed images.

Halo Pt. 9: Accelerometer Calibration

In this post, I'm going to talk about how I calibrated the accelerometer in my bot. The calibration finds the relationship between measured acceleration and robot speed.

Some definitions

Measured acceleration comes into the processor in units of "LSB", or least significant bits. It's a jargon-y term that basically means raw data. We can convert LSB to g's by finding how many LSB are in the full-scale range of the accelerometer, then dividing by the full scale. But it's an unnecessary step since in the end we don't care about g's. We're going to directly relate LSB to robot speed.

As discussed in the previous post, we are expressing speed in terms of microseconds per degree to allow for more integer math. Here is what our angle calculation looks like:

code_new.PNG

Where robotPeriod is our measurement, in microseconds per degree.  The purpose of this calibration is to find the relationship between accelerometer LSBs and microseconds per degree.
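The actual code is in the image above; as a rough sketch of the same idea (every name here other than robotPeriod is mine, not the real firmware), the update divides elapsed microseconds by robotPeriod to get degrees traveled:

// A minimal sketch of the angle update described above (illustrative, not the real code).
// robotPeriod is the measured speed in microseconds per degree.
uint32_t lastUpdateMicros = 0;
uint32_t heading = 0;   // degrees, 0-359

void updateHeading(uint32_t robotPeriod) {
  uint32_t elapsed = micros() - lastUpdateMicros;       // microseconds since the last update
  uint32_t degreesTraveled = elapsed / robotPeriod;     // us / (us per degree) = degrees
  heading = (heading + degreesTraveled) % 360;          // wrap the heading to 0-359
  lastUpdateMicros += degreesTraveled * robotPeriod;    // carry the division remainder forward
}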

The calibration method

We will define the relationship between accelerometer data and microseconds per degree by taking simultaneous measurements of both during a real-world test. I use the test stand from Part 6 to hold the bot, then slowly spin the bot up and down.

Meanwhile, the bot is measuring accelerometer data and beacon edge times. Beacon edge times tell us pretty directly our microseconds/degree, since they are measured in microseconds and we know our edges are 360 degrees apart. If you don't have a beacon on your bot, you can substitute with an optical tachometer.

Our bot decomposes the data and sends it to the controller (this is why we used XBees as our radios).

robotPeriod in this case represents our raw accelerometer measurement

On the controller side, we recompose the data and send it to my computer over the USB cable.

data_receive.PNG

You can also do this by having a third XBee plugged into your computer via an XBee Explorer; I just didn't happen to have one lying around.

On the host computer, I have a Python script running that pulls in the data. It doesn't do much but put it into a file for later.

writeToFile.PNG

I spun the bot up and down a couple times, and then shut down the test. Next, I wrote another Python script to parse the data. Here are the points I gathered:

dataplot.png

As you can see, there are some good curves, but lots of artifacts! We will filter those artifacts out here. In the final code, the accelerometer and the beacon will work together to filter those out in real time.

Some easy artifacts to filter out include the ones at y = 0 (when the accelerometer is reporting data but the beacon isn’t) and the ones at very large y (which happen when the beacon misses several rotations). We add a simple high and low cutoff to filter those.

dataplot_artifacts1.png

Now we see the relationship we are looking for much more clearly. But it’s in triplicate here. Every duplicate is due to the beacon missing N rotations: if the beacon missed every other edge, the us/deg would double; if it missed twice in a row, it would triple, and so on.

We need to remove the duplicate curves, as they aren’t useful to us. To do this, we will use a piecewise linear cutoff.

The last artifact is the little “wing” at y = 400. This one actually stumps me; I don’t know where it’s from. We’ll add an extra filter and move on…

dataplot_final.png

That looks much better. We need to use curve fitting to find the closest equation to represent this curve. Luckily, Python can do that too!

curve_fit.PNG
finalplot.png

That looks pretty good! Let's take a look at the coefficients it gave us:

coefficients.PNG

This makes our equation:

equation.PNG

Ouch, that is pretty ugly. Luckily it only needs to run every time we get a new accelerometer measurement. And if it ends up being too slow in the future, we can use a lookup table instead.

Here is how the bot performs with the above equation and the improved accelerometer algorithm:

Halo Pt. 8: Improved Accelerometer Algorithm

Like the beacon in the post before, the accelerometer algorithm has some room to improve. The primary reason is the same as well: acceleration causes problems. And now, for the driest heading I have yet written on this blog:

What the algorithm does, in graphs

Like the improved beacon algorithm, there's really no better way to explain than with graphs. Here's what our bot is doing when it's spinning at a constant speed:

steadyState.PNG

Revisiting our first-order algorithm for calculating our heading:

linearAlgorithm.PNG

If we multiply the speed of the bot by how long it has been spinning that fast, we get how far it has spun. This is accurate so long as we are measuring infinitely fast, or so long as the robot continues spinning at the same speed.
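In symbols (my transcription, matching the notation used later in this post), the first-order update is:

\Theta_f = \Theta_i + \omega_i \,(t_f - t_i)

where Θ_i and ω_i are the angle and speed at the last measurement and t_f is the current time.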

Let's look at what happens when the robot accelerates:

accelerating.png

This case causes our algorithm to generate lots of error. It's not obvious where the error comes from, so let's visualize what our algorithm is calculating:

acceleratingBlocks.png

The red shaded area above is composed of three rectangles, one for each measurement. Each rectangle's width is the time interval and its height is the measured speed. What our algorithm tries to do is calculate the size of the area under the velocity line. This area equates to the distance traveled.

The simplest algorithm uses rectangles because their area is easy to calculate (this is also known as a Riemann sum). You simply multiply height (measured velocity) by width (time interval). But the graph above shows that when the velocity changes over time, some area is missed by the rectangles. All missed area represents error in the measurement, which causes drift.

If we want to reduce the error in our measurement, we can do two things:

1. Measure faster

If we measure infinitely fast, there will be no error. Or for a more likely solution:

2. Change how we calculate area

Instead of using rectangles, we can use a different shape that better matches our line. In the case above, the best match is a trapezoid, so we will use the trapezoidal rule here. The area of a trapezoid can be found with:

trapezoidBase.PNG

Where our trapezoid is defined as:

Trapezoid.png
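In case the images don't come through: the standard trapezoid-area formula, with b_1 and b_2 the two parallel sides (here the initial and final speeds) and h the distance between them (the time step), is:

A = \tfrac{1}{2}\,(b_1 + b_2)\,h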

The final equation, part A:

trapezoidalMelty.PNG

Given that you run this equation every time you get a new accelerometer measurement, the variables marked "i" are from the previous measurement and the variables marked "f" are from the recent measurement.
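Plugging the trapezoid area into the angle update, part A presumably amounts to (my transcription of the image, using those i/f subscripts):

\Theta_f = \Theta_i + \tfrac{1}{2}\,(\omega_i + \omega_f)\,(t_f - t_i)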

However, this equation can only run every time we get a new accelerometer measurement. If our code runs faster than our accelerometer (which is likely), then we will need to guess at our rotational velocity in between measurements. We can do this by borrowing an equation from the beacon algorithm:

linearExtrapolation.PNG

Remember this one? We can use it to extrapolate our current velocity given that we remember our last two measurements. Substituting in our variables:

The final equation, part B:

trapezoidalPredicted.PNG

Where (ω_1, t_1) is the earliest measurement, (ω_2, t_2) is the most recent measurement, Θ_2 is the angle at the most recent measurement, t_f is the current time, and Θ_predicted is where we expect the bot is pointing now.
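Combining the linear extrapolation of velocity with the trapezoidal area, one way to write part B with the symbols above (my reconstruction, not a copy of the image) is:

\Theta_{predicted} = \Theta_2 + \omega_2\,(t_f - t_2) + \frac{\omega_2 - \omega_1}{2\,(t_2 - t_1)}\,(t_f - t_2)^2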

Practical Considerations

In my implementation, I wasn't able to use the equations above as-is. This is due to the limitations of embedded arithmetic.

Time in my bot is measured in microseconds, but distance is measured in degrees. So to bridge the gap, my velocity term needed to be in units of degrees per microsecond. In practice, that is a tiny number. A speed of one revolution per second is only 0.00036 degrees per microsecond!

Our microcontroller doesn't have a floating point unit, so any floating point operations are emulated in software. Consequently, floating point operations will be very slow. To remedy this, we can try to minimize our use of floating point in favor of integer math.

Instead of running calculations using degrees per microsecond, we can use microseconds per degree. This turns our 0.00036 into 2777.78, which (rounded to 2777) is much faster to do math with.

To support using inverse speed, however, we need to change our equations.

invertedOmega.PNG

This doesn't help much on its own, but we can rearrange the equation a bit:

finalInverted.PNG

To explain why this equation works better, let's say that the robot is moving at a constant 1000 rpm (around where we expect to operate) and the accelerometer is being sampled at 100 Hz.

1000 rpm equates to 166.67 us per deg, which as an integer rounds down to 166. This is tau.

t_f - t_i is the time between measurements in microseconds. At 100Hz, that time is 10ms, or 10,000us. 10000/166 = 60.2 degrees.

I was able to do all of this math with integers only without losing too much precision. That would be impossible without redefining the equation in this way. This method can be used for final equations part A and B similarly.
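As a concrete sketch of what that looks like in code (the names here are illustrative, not the actual firmware):

// Integer-only heading update using inverse speed (tau = microseconds per degree).
// At 1000 rpm, tau ≈ 166, so a 10,000 us sample interval advances 10000 / 166 = 60 degrees.
uint32_t heading = 0;   // degrees, 0-359

void updateHeading(uint32_t tau, uint32_t t_i, uint32_t t_f) {
  uint32_t deltaDegrees = (t_f - t_i) / tau;   // integer divide, no floating point needed
  heading = (heading + deltaDegrees) % 360;
}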

Conclusions and Caveats

These equations should exhibit much less drift than before. As with the beacon algorithm, we can trade complexity for precision.

The main caveat here is that error is still generated whenever the acceleration changes. This should not cause as much of an effect as acceleration did on the rectangular method. But if you truly want the best-of-the-best here, you can substitute the trapezoidal method with Simpson's rule to account for when the acceleration varies over time.

And you're only going to be as accurate as your calibration. See my next post for how I calibrated my bot.

Halo Pt. 7: Improved Beacon Algorithm

In my last post, I introduced two sets of algorithms. One to run a beacon-only system, the other an accelerometer system. The algorithms were fairly easy to work out the math for, and fairly easy to express in code.

But say you need better. Say you need a perfect flicker display, or the easiest-to-control meltybrain bot possible. If you're that kind of builder (I am, at least), then you're in luck. We can take these algorithms a step further. The math is harder (read: longer), so I'm splitting this up into one post for each sensing system.

Beacon Sensing, part 2

In the previous post, I showed graphs with beautiful, straight lines. The original algorithm works great if this is the case. But what if it's not the case? What if, for instance, the bot is accelerating?

linearBeaconProblem.png

As you can see, the prediction diverges from reality. Our algorithm expects a line, but the robot is accelerating, so we get a parabola instead. The error resets at every measurement, so this may not be enough of an issue for the typical meltybrain. But if you made it past the first paragraph of this post, it must be an issue for you.

What if instead of keeping track of the last two edges, we keep track of the last three?

parabolic1.png

If we assume that all three points are on the same parabola, we can calculate the equation of the parabola and use it to find where we are now. The parabola equation is:

parabola.PNG

The equation has three constants we need to solve for. Fortunately we have three prior points to help us do that: (x1, y1), (x2, y2), and (x3, y3), where x1 is earlier and x3 is later.

parabolaEquations.PNG

Represented in matrix form:

parabolicMatrix.PNG

Solving for a,b,c:

solvedPArabolic.PNG

We can simplify by understanding that y1=-720, y2=-360, and y3=0.

simplifiedParabolic.PNG

Now let's pull the common "d" parameter off and substitute in more appropriate variables:

The Final Equations:

finalParabolic.PNG

Where t1-t3 are the measurement times (t1 being earliest), t is the current time, and Θ is the calculated angle. a, b, c, d only need to be recalculated at every beacon edge. Θ is calculated on every iteration. Here's what our graph looks like now:

parabolic2.png
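For the curious, an algebraically equivalent way to write the whole extrapolation, plugging the known y values (-720º, -360º, 0º) straight into Lagrange interpolation (my reconstruction, not the coefficients from the image), is:

\Theta(t) = -720\,\frac{(t - t_2)(t - t_3)}{(t_1 - t_2)(t_1 - t_3)} - 360\,\frac{(t - t_1)(t - t_3)}{(t_2 - t_1)(t_2 - t_3)}

Evaluated at the current time t (and wrapped to 360º), it gives the same heading as the a, b, c, d form.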

These equations should work even if the bot is not accelerating. There will still be error in your calculation if your acceleration varies over time, but those events tend not to last very long.

An additional gotcha you should know about with either beacon algorithm (linear or parabolic) is that missed beacon edges can severely disturb your predictions. The bot will think it's spinning much slower, since the edges it saw were so far apart. This disturbance lasts longer in the parabolic algorithm, since the "memory" of the event lasts an additional 360º. Clever code can handle missed triggers, but it will need to be specifically coded in.

Halo Pt. 6: The Code

You can find the full code here. Read on for a discussion on how it works!

I had originally planned on testing the controls code with the robot on its own two wheels. But after a catastrophic failure in one of said wheels, it was determined I would need to wait for my partner to machine some new ones out of aluminum. That would take too long, so I picked up some clamps and printed a test stand.

testSetup.JPG

Test stand in place, I started working on the code in earnest.

codeBlockDiagram.png

The overall code structure follows the block diagram above. I won't go into detail on how I implemented every feature; hopefully the comments in my code can handle that. Instead, I'll go through the architecture here.

The State Machine

The main state machine is composed of three states: idle, tank, and spin.

Idle is the "safe mode" the the robot reverts to if there is a problem or if the dead-man switch is released. In this mode the motors are set to 0 speed. I originally tristated the motor outputs using my voltage-level translator, but found it caused odd behavior in my ESCs so I now only manipulate the speed.

When the robot is enabled, it transitions to either "spin" or "tank" depending on the throttle slider. If the throttle is very near 0 it goes to tank, otherwise it starts up in spin. The robot reverts back to idle if the communications time out or the dead-man switch is released.

Tank mode allows the robot to be controlled like a ram bot. This is good for testing and as a backup in case for some reason spinning is ineffective. It's a bit of a misnomer, as I actually use arcade controls in this mode, but the name stuck.

While in spin mode, the robot begins reading from the accelerometer and the beacon to determine the real-time heading. It also powers the motors as allowed by the throttle command, taking steering pulses into account.
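As a rough sketch of the structure (the names and the throttle threshold are illustrative, not copied from the real code), the state machine boils down to a switch driven by the latest controller packet:

// Minimal sketch of the idle/tank/spin state machine described above.
enum State { IDLE, TANK, SPIN };
State state = IDLE;

void updateState(bool deadManHeld, bool commsTimedOut, int throttle) {
  if (!deadManHeld || commsTimedOut) {      // any fault drops us back to the safe state
    state = IDLE;
    return;
  }
  if (state == IDLE) {
    state = (throttle < 5) ? TANK : SPIN;   // near-zero throttle -> tank, otherwise spin
  }
  // TANK and SPIN stay put; the driving code for each mode runs elsewhere in the loop
}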

The Math

While the bot is in spin mode, it is constantly running math to figure out where it's pointing. I've been pretty nebulous about it thus far, but I'll do my best to explain it here. First I'll focus on using the infrared beacon.

Infrared Beacon Only Sensing

Say your robot is rotating, and you have a perfect beacon system that generates a positive edge (goes from low to high) once per rotation. You record the times of two edges. Since we know every edge must be 360º away from the last one, we can plot these edges on a graph:

linear1.png

Looks like the robot is rotating at one revolution per second. Now, a little later, the robot tries to figure out where it's pointed. We only know the graph above, and the current time.

linear2.png

To find which way we are pointing now, we just extend that blue line, and find where it crosses the red dotted line.

linear3.png

Now for an equation form of this method (with thanks to wikipedia).

linearExtrapolation.PNG

X in our case represents time, and y represents distance in degrees. The earliest point measured is (x_k-1, y_k-1), the latest point measured is (x_k, y_k), and the current position is (x_star, y). We can simplify this a lot by realizing that our previous two points were 360º apart in y. For simplification, let's say point one is at (t1, -360º) and point two is at (t2, 0º). The current time is t3.

linearExtrapolation2.PNG

And that's our final equation. Here's what it looks like in code form:

linearCode.PNG

Where newTime is the current time, beaconEdgeTime[0] is the latest edge time, and beaconEdgeTime[1] is the edge time before that. I replaced the last minus sign with a modulo to make sure the angle never goes above 360º.
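If the image doesn't load for you, the calculation it shows is roughly the following (my reconstruction, using the variable names described above):

// Sketch of the beacon-only heading calculation (reconstruction, not the original code).
uint32_t beaconEdgeTime[2];   // [0] = latest edge, [1] = the edge before that, in microseconds

uint32_t beaconHeading() {
  uint32_t newTime = micros();
  uint32_t rotationPeriod = beaconEdgeTime[0] - beaconEdgeTime[1];         // us per 360 degrees
  return (360UL * (newTime - beaconEdgeTime[0]) / rotationPeriod) % 360;   // current angle, 0-359
}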

This system works great if you have just an infrared beacon. However, if your robot accelerates or decelerates during a revolution, the code above will not account for that, and error will be introduced until the next beacon edge.

Accelerometer Only Sensing

Accelerometers measure the rotation speed of the bot by measuring centripetal acceleration (a_c).

centrepitalAcceleration2.PNG

Where omega is rotational velocity and r is the distance from the center to the accelerometer. This is unlike the beacon, which measures the rotational position of the robot directly. The benefit, however, is we can make measurements constantly, not once per revolution as before.
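To make that concrete, here is a hedged sketch of turning one accelerometer sample into a rotational speed. The g-to-m/s² conversion and the accelerometer radius are assumptions for illustration; in practice I calibrate this relationship directly (see the calibration post).

// Sketch: centripetal acceleration -> rotational speed (illustrative, not the actual firmware).
// accelG is the radial acceleration in g's; r is the accelerometer's distance from the spin axis in meters.
float omegaFromAccel(float accelG, float r) {
  float a = accelG * 9.81f;          // convert g's to m/s^2
  float omegaRad = sqrtf(a / r);     // a_c = omega^2 * r  ->  omega = sqrt(a_c / r), in rad/s
  return omegaRad * 57.2958f;        // convert to degrees per second
}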

If the accelerometer is mounted against the ring wall as in our design, the z-axis (which looks radially into the ring) is your centripetal acceleration. In addition, the other axes have value for us. The axis axial to the ring (up and down when the ring is flat on the ground) measures only gravity, so it can tell us if the bot has been flipped over. The axis tangential to the ring tells us the rotational acceleration of the ring, which an advanced programmer can use like a gyroscope to improve our algorithm.

To turn a speed measurement into a position measurement, we need to use some concepts from calculus. We can relate speed and position using:

rotationalCalculus.PNG

Where Θ is angular position. These equations let us make a perfect measurement of distance from speed. Unfortunately, they require knowledge of the speed at all times, and we only have periodic measurements available. But if we replace the 'd's with deltas, we can get close to the correct value.

integralSimplified.PNG

Θf is the current angle of the robot, omega is the last measured speed, tf is the current time, ti is the time of the last measurement, and Θi is the angle of the robot at the last measurement point.
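Written out with those symbols (my transcription of the image above):

\Theta_f = \Theta_i + \omega\,(t_f - t_i)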

The final equation above is the most basic way to measure position using an accelerometer. The catch is that there was a certain amount of error introduced when we turned the d's into deltas. That error builds up and causes your angles to drift over time.

Here's a code implementation:

linearAccelCode.PNG
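If the image doesn't come through, the idea is roughly the following (a reconstruction with illustrative names, not the original code):

// Sketch of accelerometer-only integration.
float heading = 0.0f;        // degrees
uint32_t lastMicros = 0;

void accelIntegrate(float omegaDegPerSec) {   // omega from the latest accelerometer sample
  uint32_t now = micros();
  heading += omegaDegPerSec * (now - lastMicros) * 1e-6f;   // deg/s * s = degrees
  if (heading >= 360.0f) heading -= 360.0f;                 // keep the angle wrapped
  lastMicros = now;
}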

Hybrid Sensing

If you are looking for the best of both worlds, accelerometers and beacons pair together easily. The best way to do this is to code the bot as if it were accelerometer-only, but zero the angle every time you see a beacon edge. This allows for real-time speed measurement without error build-up problems.

You can also use this system to prevent beacon mis-triggers from confusing your bot. This is possible by checking your accelerometer-measured heading every time a beacon edge is recorded. If the edge occurred much earlier than the accelerometer was expecting, you can safely ignore it.
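A hedged sketch of that combination, building on the integration sketch above (the 270º threshold is illustrative):

// Hybrid sensing: integrate the accelerometer continuously, re-zero on trusted beacon edges.
void onBeaconEdge() {
  // If the accelerometer thinks we have only turned a fraction of a revolution since the
  // last edge, this edge is probably a reflection -- ignore it.
  if (heading < 270.0f) return;
  heading = 0.0f;    // otherwise trust the edge and cancel any accumulated drift
}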

Halo Pt. 5: Writing Reliable Code

Before I get into the details of the software, I want to make some points about how this code needs to run. This code needs to be, if nothing else, bulletproof. If it hangs, crashes, or otherwise behaves erratically, it could cause you to lose the match. More importantly, it could seriously harm you or others.

Battlebots are not toys that can be used recklessly. Even beetleweights can break bones, and anything heavier can easily kill someone.

Meltybrains put your code directly in charge of the drive motors. If you screw up, the bot could take off unexpectedly and hurt someone.

As an example, an early version of our handheld controller had the dead-man switch wired in a normally-closed configuration, such that holding the switch opened the circuit. This was a huge mistake; if something else caused an open-circuit (say a broken solder joint), the robot would become enabled!

Sure enough, the controller ended up developing an intermittent connection that caused the robot to suddenly enable and drive at random. Luckily spin mode wasn't coded in yet, so no damage was done. But it was a scary reminder of why you need to really think through the systems you're building.

How to build bulletproof code

That header opens up a massive can of worms. This is a constant topic in all of computer science, and I can't possibly hope to do it justice here. What I can do is give some practical tips that I've utilized to make my own bot safer.

I'm going to focus on Arduino throughout this post. I used Arduino, and I suspect that most people reading this with the intention of building their own meltybrain will likely do the same.

Synchronicity

Say you are listening for messages coming from your controller. They come in on your serial line, upon which you figure out what it means and do something with it. There are three ways to code it:

Way 1: Blocking

synchronous.PNG

A blocking read, one that keeps calling Serial.read() until it sees a newline character and then returns the full array of bytes as a string, is an extremely simple method, so most new coders start here. The big problem comes from what your code is doing while the read is waiting for the newline: nothing! Your code will happily sit there forever waiting for the newline to arrive.

This method is a synchronous, or blocking, method. Meltybrains need to crunch numbers much faster than your communicators and sensors can deliver them, so doing nothing while you wait for things to finish won't get you far.

You can get this method to work by using an RTOS, but that brings its own set of headaches, so I won't recommend it to a new builder.

Way 2: Polling

polling.PNG

This method periodically checks to see if there are bytes available. If there are, the bytes are downloaded and stored for later. If a newline is received, all of the stored bytes are sent to a function for processing and the buffer is reset.

This is called a polling loop, and is our first asynchronous method. It's also the method I use most in our bot. It allows you to do other things while waiting for a slower event to complete.

The downside is that if something else blocks your code (forces you to wait), you won't receive any messages until your code un-blocks. The best way to make this style work is to make sure none of your code can block for too long. This is sometimes difficult, so we have a third option:
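A minimal sketch of the polling pattern (the real code lives in the repo; this just shows the shape of it, and handleMessage() is a hypothetical parser):

// Polling-style serial receive: check for bytes on each pass through loop(), never block.
char rxBuffer[64];
uint8_t rxIndex = 0;

void handleMessage(const char *msg) { /* parse the command here */ }

void loop() {
  while (Serial.available() > 0) {               // only read bytes that have already arrived
    char c = Serial.read();
    if (c == '\n') {
      rxBuffer[rxIndex] = '\0';
      handleMessage(rxBuffer);
      rxIndex = 0;
    } else if (rxIndex < sizeof(rxBuffer) - 1) {
      rxBuffer[rxIndex++] = c;
    }
  }
  // ...the rest of the control loop keeps running regardless of serial traffic
}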

Way 3: Interrupts

interrupt.PNG

Notice that the code looks pretty similar to way 2. The primary difference is that instead of placing it in loop(), we have placed the code in serialEvent(). serialEvent() is a special name that tells Arduino to run this code automatically whenever bytes arrive over the serial connection; the low-level byte reception itself is handled by an interrupt service routine, or ISR, in the serial driver. ISRs are sections of code that run the moment something happens, such as a byte showing up at the UART.

Since the byte handler automatically runs, we don't need to constantly check if we've received any bytes as in the polling example. This can save us even more time to do more important things. Also, the processor drops everything it's doing to run the ISR. So we will receive bytes even if the code is currently blocked! The original code resumes after the ISR has completed.

That last note is also the weakness of this strategy. If you receive too many interrupt events or your ISR takes too long to run, you can end up spending all of your time in your ISR instead of running other code. So you need to manage your interrupts carefully, and limit how much code you put in your ISR.
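For completeness, a sketch of the serialEvent() version, reusing the buffer and handler from the polling sketch above (again only the shape of it):

// serialEvent() is called by the Arduino core between passes of loop() whenever bytes are
// waiting on Serial, so the buffer fills itself without loop() having to check.
void serialEvent() {
  while (Serial.available() > 0) {
    char c = Serial.read();
    if (c == '\n') {
      rxBuffer[rxIndex] = '\0';
      handleMessage(rxBuffer);
      rxIndex = 0;
    } else if (rxIndex < sizeof(rxBuffer) - 1) {
      rxBuffer[rxIndex++] = c;
    }
  }
}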

Watchdog timers

Despite our best intentions, "stuff" happens. Our code can crash or end up in weird places, and we may be powerless to save it. There is something we can add so that if our code goes kaput, the robot still safely carries on: a watchdog.

A watchdog is a timer that constantly counts down. If it ever hits zero, your processor resets. Normally that's bad, but you can prevent it from resetting by "feeding" the watchdog. This resets the timer, but doesn't stop it. As long as you keep feeding the watchdog, your processor won't reset. But in the off chance that your code freezes, the watchdog will "get hungry" and restart your processor. You can also make it do things right before the processor restarts, such as turn off your motors. Useful! Instead of getting into it here, I'll point you at this excellent writeup of implementing a watchdog in Arduino.

I highly, highly recommend you implement a watchdog in any battlebot that runs code. If your processor freezes without one, it's likely you won't be able to disable your bot. This is incredibly dangerous! No one is a good enough coder that watchdogs aren't useful.
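For classic AVR-based Arduinos, a minimal watchdog setup looks something like the sketch below. (The Teensy has its own watchdog registers, so treat this as the general shape rather than drop-in code.)

#include <avr/wdt.h>   // AVR watchdog helpers

void setup() {
  wdt_enable(WDTO_250MS);   // reset the chip if the watchdog isn't fed for 250 ms
}

void loop() {
  // ...read sensors, run the control math, update the motors...
  wdt_reset();              // feed the watchdog once per healthy pass through the loop
}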

 

Halo Pt. 3: The Big Idea

Our robot is called Halo and it is a meltybrain.

DSC00208.JPG

There are a few reasons we went with this design.

Reason 1: Physics

Rotational inertia of a ring

The rotational inertia of a ring is double the rotational inertia of an equally heavy, equally large disk. And with the name of the game being more energy, the gain in rotational inertia can't hurt.
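For reference, the two standard formulas, with m the mass and r the radius:

I_{ring} = m r^2 \qquad I_{disk} = \tfrac{1}{2} m r^2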

Reason 2: Mechanical simplicity

Our chassis is a single piece of aluminum with some stuff bolted to it. While the ring is a bit tricky to machine (my partner will get into that in a later post), the overall assembly is very simple.

This goes hand-in-hand with mechanical strength. The ring will be extremely tough to break, while keeping critical electronics efficiently protected.

Reason 3: Size

With the weight being concentrated on the ring edge, the radius of the robot can be a bit larger than is typical for the weight class. Given that rotational inertia has an r squared relationship, that gives us even more energy to work with.

There are tradeoffs of course

Space

Our only protection is the ring. If any components extend too far inward, they become much more vulnerable. More than anything, this hurts our

Drive design

Due to the scarcity of mounting surfaces, the drive motors must be rear mounted. And given the lack of bearings, all lateral force on the wheels transfers directly to the rotor of the motor. Lateral force on rotors is generally a very bad thing and should be avoided. For us, we can only reduce it by shortening the moment arm. That means no gearbox; the wheels must be direct-driven and kept close to the ring edge. Even still, if the robot gets launched and falls directly on the wheels, the lateral force on the rotors will be huge.

Direct driving wheels is hard, especially given the unusual amount of power our drive motors need to put out. Large motors with very low kV values are needed.

And with the wheels very close to the ring edge, we are vulnerable to undercutting spinners.

Accelerometer Saturation

Most meltybrains use accelerometers as their main sensing element. Typically, the accelerometer is kept close to the center of rotation so that the g forces on it don't get too high. But for us, we can only mount our accelerometer very far away from the axis of rotation. At full speed, we expect our accelerometer to experience over 300g's!

Over the next couple posts I'm going to dig into the details of Halo, starting with the electronics.

Halo Pt. 2: Meltybrain

Meltybrain is the name of a particular style of horizontal spinning battlebot where the entire robot spins on its drive wheels.

If you want to put the most energy possible in your spinner, you build a meltybrain.

This is down to physics, some simple and some tricky. First, the energy of a spinner:

Where I is rotational inertia, and ω is rotational velocity
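The equation behind that caption is the standard rotational kinetic energy:

E = \tfrac{1}{2} I \omega^2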

If you're looking to store more energy, the most obvious thing to increase is your rotation speed; it has a square relationship! Unfortunately, there is a limit to how fast you can go, and it's not just the max speed of your motor.

Bite

In battlebots there is a term called "bite", which refers to how efficiently your weapon transfers energy to your opponent. You can think of it as the difference between a weapon skipping off and slamming in. This match is great at showing both scenarios:

Strategies to improve bite

  1. Approach your opponent fast
  2. Aim for corners, avoid flat faces
  3. Reduce the number of teeth on your spinner
  4. Reduce the speed of your weapon

Yep, point 4 is what really limits our rotation speed. Most spinners could run at 10,000 RPM, but in practice are only run up to around 5,000 RPM.

So what now?

If we can't increase speed any more, rotational inertia is the only thing left. The equation for rotational inertia depends on the shape of the spinner. For the most common case, the spinning disk, it is:

Where m is mass and r is radius
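Written out, that equation is:

I_{disk} = \tfrac{1}{2} m r^2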

While it's true that r has a square, in practice it all comes down to weight. The more mass you are willing to put in your spinner, the more inertia it holds. For most bots this becomes a tradeoff between spinner power and pretty much anything else.

And then there's meltybrain

If your spinner is your entire robot, there is no longer any tradeoff between spinner weight and things like armor, drive motors, batteries, etc. Any mass that protects your robot or makes it go faster also contributes to a more powerful spinner. This is why saying meltybrains are the most powerful spinners (in theory) is objectively true. They can hold the most energy without sacrificing bite.

Great, but how does it move forward?

This is the tricky bit. The robot still needs to move laterally about the field while it spins. The simplest way to accomplish this is to pulse the power of your drive motors at just the right place every rotation. The aggregate effect is that the robot drifts in whatever direction you're pulsing. You only need one wheel to do this!

In order to pulse the motors at the right time, the robot needs to constantly know what direction it's facing. Robots can't know this without some kind of sensing element and a processor to crunch the numbers.
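As a hedged sketch of the idea (every name and number here is illustrative): once the bot knows its heading, it boosts the drive power over a small arc each revolution, and the small pushes add up to a net drift.

// Sketch of translation-by-pulsing: push harder while the bot points near the commanded
// direction, run at the base spin power for the rest of the rotation.
void setMotor(int power) { /* stand-in for however you command your ESC */ }

void driveMotors(int heading, int commandedDirection, int spinPower) {
  int error = (heading - commandedDirection + 360) % 360;   // angular distance, 0-359
  if (error < 20 || error > 340) {                          // within ~20 degrees of the target arc
    setMotor(spinPower + 30);                               // pulse: push a little harder here
  } else {
    setMotor(spinPower);                                    // normal spin power elsewhere
  }
}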

Sensors

Accelerometer

The accelerometer can be used to directly measure the speed of rotation by measuring the centripetal acceleration. The relationship is below:

centrepitalAcceleration.PNG
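In equation form (a_c is the centripetal acceleration, ω the rotational speed, and r the distance from the spin axis to the accelerometer):

a_c = \omega^2 r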

Knowing the rotational speed, your robot can figure out its direction by reasoning "since I have been moving this fast for this long, I must have moved this far and am now pointed this direction." The catch is that every measurement has a small error. These errors build over time and cause the forward heading to drift. This can be corrected for by the driver, but the more drift, the harder it is.

Gyroscope

A gyroscope placed in the center of the robot can measure the speed of the robot indirectly by measuring changes in the speed. The processor can add up these changes to get speed. Because the gyroscope has an extra layer of indirection from the accelerometer, it generates more error. It can be used in conjunction with an accelerometer, however, to reduce the error caused by rotational acceleration.

Encoder

An encoder can be used to measure how much your wheel has rotated, which in turn directly measures how far your robot has rotated. The main issue is that any time your wheel slips, you generate error. Wheel slip happens any time blows are traded, or passively if the wheels are having trouble gripping the floor.

Light Beacon

If you put a light source on your controller (either LEDs or a laser pointer) and point it at the robot, the robot can receive the light. Since the light is directional, the bot will only see it when facing a particular direction, which can indicate to your bot where the forward direction is. This is an absolute reference, meaning it does not drift! The only catch is that light bounces, so get in the way of the wrong shiny surface and your sensor can still get false-triggered.

If you're going to do this, I recommend using a modulated IR source and receiver: IR so that your rival driver doesn't get distracted (something safety staff look down upon), and modulated so that bright lights and flamethrowers don't false-trigger your sensor.

Processing

Sensors are useless without something to read the data and crunch the numbers. Most bots use just an RC receiver module. This won't cut it for a meltybrain; you also need a controller that can run code. I won't call out specific modules here, because there are a lot of options. But some notes to consider:

The processor needs to be reasonably fast

If the robot is spinning at 5,000 RPM, it doesn't leave your processor much time to think. How fast your processor needs to be depends on what it's doing. The most minimal design can probably make do with 8MHz, but if you want to use advanced sensing solutions or flicker displays, think about getting a faster one.

It needs to survive being in a battlebot

Smaller and lighter is better

Heading Indication

It's great if your robot knows which way is forward, but your driver needs to know too! The easiest way is to have an LED visible on your bot that blinks whenever your robot faces forward. Persistence of vision will cause you to see a streak on the forward side.
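A minimal sketch of that indicator (the pin number and arc width are arbitrary; set the pin as an output in setup()):

// Blink an LED over a narrow arc each revolution; persistence of vision draws a steady
// streak on the "forward" side of the spinning bot.
const int headingLedPin = 13;

void updateHeadingLed(int heading) {
  digitalWrite(headingLedPin, (heading < 15) ? HIGH : LOW);   // lit only near 0 degrees
}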

Code

I'll post my own code in a later post. For right now, OpenMelt is a great reference.

Halo Pt. 1: A first adventure in combat robotics (Overview)

A friend of mine and I have been developing a battlebot over the past several months. I thought it was about time I started documenting the project.

This is the first attempt for both of us, so we aren't attempting to play with the 220lb monsters you see on TV. Luckily, there are non-televised events with weight classes all the way down to 150g. We settled on the 3lb beetleweight class as a compromise between cost and "kickass".

In any weight class, there's a general rock-paper-scissors relationship between three robot archetypes:

Wedges

Simple rams on wheels. Most are designed to be as low and indestructible as possible. These tend to get beat by

Lifters

These feature a large lifting element. They seek to toss their opponent out of the arena or simply on their back. Lifters are countered best by

Spinners

The scariest of the lot. These bots store as much energy as possible in a spinning element, and then try to release it as fast as possible in their opponent. These are most often beat by wedges capable of withstanding the blows.

There are a few exceptions to these trends (see: Blacksmith), but the best advice for newbies like us is to pick an archetype and try to design around its weaknesses. When narrowing down our bot designs, Pierce and I were weighing "awesome factor" pretty heavily.

So we picked spinner

Spinners are harder to build, of course; poorly designed ones tend to destroy themselves more than their opponent. But they are also perfectly capable of launching both bots across the arena after a good hit (especially in the beetleweight class).

Spinners come in a wide variety, but the general types are:

Vertical Spinners

Can either be drum style (see Minotaur) or disk style (see Counter Revolution). They generally try to knock their opponent into the air with an uppercut.

Horizontal Spinners

Since in this style the spinner is parallel to the floor, builders have more room to be creative. Most common is the spinning bar style, as popularized by the legendary Tombstone. After that, shells are fairly common, and then the occasional ring spinner.

And then there's Meltybrain

We chose meltybrain. Meltybrain is a style where the robot uses its drive wheels to spin in place. The result is that the entire robot frame becomes a horizontal spinner. This simplifies all aspects of the mechanical design, but with a catch: the robot still needs to be able to move laterally. But if it's spinning in place, how does it go forward? What even is forward anymore? The solution is to use a fast microcontroller, robust sensing, precise timing, and some math. This is quite a change from the dumb RC receivers most bots use, which is why most builders don't attempt this style. But if mastered, meltybrain can unlock massive damage potential while being tougher for it.