Robocode

this page isn't finished..

Periodically I rediscover Robocode, and have been doing so for over 10 years.

On the face of it, it's pretty simple really. You have to make your robot move and shoot better than your competitors.

I'm sure there's some eastern teaching that speaks of the hidden depths of the seemingly simplistic, but Robocode is definitely a Pandora's Box type of situation. A bit like the Rubik's Cube.

I think it generally goes like this :

  1. Look at the source code for "walls"
  2. Laugh at its complete simplicity
  3. Knock up a robot in 10 minutes
  4. Get beaten 94 to 6
  5. Re-open the source code to make a few small tweaks
  6. Look up to find 2 weeks have passed

At some point I'll publish my whole robot here, but for now I'll just discuss some of the things I've learned.

setting goals

Robocode has code size categories (micro, mini, mega).
Most people ignore this because the obvious task is to write the best robot, the one that beats all the others, irrespective of code size.
Now, Robocode was originally intended to be a Java learning aid, so (as an Enterprise Java developer) I find size categories hideous and wrong. In order to restrict the size of their code, people take shortcuts and write Java that might be considered improper: no coding to interfaces, no use of standard patterns, etc.
My temptation then is to do what everyone else does and just write the best bot I can regardless of code size.
But a part of me rebels.

Your first task is to beat all the standard bots, but then obviously the ultimate challenge is to beat the best.
The current RoboRumble rankings could be found here at the time of writing.
But during the course of writing and researching Robocode I came to use DrussGT as the benchmark for 'best' robot.
Then I found Aristocles, which was written by someone who was very active in Robocode named 'Pez'.
Bearing in mind what I've said above about Robocode having the de facto aim of writing the best SMALLEST bot you can..
Aristocles is an open source microbot. It is 160 lines long; the source file contains 5679 characters including the package declaration and imports.
There are better microbots but I consider this one of the best bots ever written, and I've adopted this as my initial target.
If I cannot beat a bot that is only 160 lines long, then I may as well give up. But let me tell you, it's not easy.

targeting

the task

Aim for the place you think the enemy will be when your bullet arrives there
Simple really, but there are a staggering number of approaches to solving this simple problem.
Something to know right away : You can fire a bullet with varying power. The more power you fire with, the slower your bullet will travel, but the more damage it will do.
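
For reference, here's a quick sketch of the standard bullet rules (these are Robocode's own formulas, just written out as Java):

// Sketch of the standard Robocode bullet rules. Power is between
// Rules.MIN_BULLET_POWER (0.1) and Rules.MAX_BULLET_POWER (3.0).
static double bulletSpeed(double power)  { return 20 - 3 * power; }  // 11.0 to 19.7 pixels per tick
static double bulletDamage(double power) { return 4 * power + (power > 1 ? 2 * (power - 1) : 0); }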

dead reckoning

The standard bot "Walls" fires at where it saw you last, and it will take you several weeks to write a bot that can beat Walls. If you're just starting out, just use this strategy. When you receive an onScannedRobot event, simply shoot at that position. You will be amazed at how effective this can be, especially in melee situations when your enemies are close to you.
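
A minimal sketch of that idea, assuming an AdvancedRobot (the 400d / distance power heuristic is just the same POWER_CONSTANT idea used later on this page):

// Fire straight at the spot the enemy occupied when we scanned it.
public void onScannedRobot(ScannedRobotEvent e) {
	// Absolute bearing to the enemy is our heading plus the relative bearing from the event.
	double absoluteBearing = getHeadingRadians() + e.getBearingRadians();
	// Point the gun at that bearing and fire; power falls off with distance.
	setTurnGunRightRadians(Utils.normalRelativeAngle(absoluteBearing - getGunHeadingRadians()));
	setFire(Math.min(400d / e.getDistance(), Rules.MAX_BULLET_POWER));
}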

traditional prediction

linear targeting

For some far smarter insight see here
Crash course in mathematics time.
If the enemy is moving with constant speed in a straight line, then we can predict their future position using two very simple equations :
x = ax * t + bx and y = ay * t + by (see below for explanation)

Let's imagine that 10 ticks ago the enemy was at x=10, y=10, and now they're at x=20, y=20. Surely in 10 ticks from now, they'll be at x=30, y=30?

Look at the graph (figure 1) which shows the x coordinate of the enemy plotted against time (imagine we have an identical graph for the y coordinate).
At time = 30 we have three recent observations of the enemy's previous positions. We calculate an appropriate bullet power (see below) which tells us we need a prediction for 20 ticks in the future; that's how long the bullet will travel. We calculate that the enemy will be at x = 1 * 50 + 0 = 50 and y = 1 * 50 + 0 = 50. We point the gun at that spot and fire with the correct power, then we cross our fingers.

the "a" factor
"a" (ax and ay) is the slope of the graph, so it is "rise over tread" (change in x/change in t). Distance divided by time is speed (or 'velocity').
In the case of our graph 10/10 = 1
When "Walls" is travelling across the bottom of the battlefield, ax will be 8 (walls travels at a speed of 8), and
ay will be 0 because walls is not travelling in the vertical axis, only horizontally.
In robocode we are given the velocity of our enemy with every sighting, but it is their abolutely velocity. What we need here is only the 'x' or 'y' component of their velocity.
The heading of the robot is an angle in the 'vector' triangle who's hypotenuse length is 'velocity', so we can calculate the length of the other two sides (velocity x, and velocity y).
To do this we use a trignometry rule (soh,cah,toa)
The x component of velocity is velocity * Math.sin(headingInRadians)
The y component of velocity is velocity * Math.cos(headingInRadians)

the "b" factor
As shown on the chart, "b" (bx and by) is the point at which the line crosses the vertical axis.
So in the case of the "Walls" bot travelling in a straight line across the bottom of the battlefield :
by will be 18 (half the robot's width of 36), because the bot is near 0, at the bottom of the y axis, heading in a straight line that cuts the y axis at 18.
If Walls was travelling across the top of the battlefield (and the battlefield is 600 high), then by would be 600-18=582, etc.
You can calculate the value of "b" at any observation by rearranging the equation for b :
bx = x - (ax * t). We can see from our graph that ax = 1, so at x = 30 (when t = 30), bx = 30 - (1 * 30) = 0.

So now we should be able to calculate the position of an enemy robot at a point in time in the future, as follows :

				
static Point2D.Double predict(long time, Point2D.Double pos, double velocity, double heading, long currentTime) {
	// "a" factors : the x and y components of the enemy's velocity
	double ax = velocity * Math.sin(heading);
	double ay = velocity * Math.cos(heading);
	// "b" factors : where each line crosses the axis at t = 0
	double bx = (pos.x) - (ax * currentTime);
	double by = (pos.y) - (ay * currentTime);
	// evaluate x = ax * t + bx and y = ay * t + by at the future time
	long t = currentTime + time;
	double px = ax * t + bx;
	double py = ay * t + by;
	// keep the prediction inside the battlefield
	Point2D.Double ret = new Point2D.Double(px,py);
	ret = clamp(ret,ABSFIELD);
	return ret;
}

So in the above :
time is the number of ticks in the future you want a prediction for
pos is the current position of the enemy
velocity is the current velocity of the enemy
heading is the current heading of the enemy
currentTime is the absolute time at which the measurements above were taken (the actual game time)
Notice the 'clamp' method here.
It is possible for our straight line equations to predict a location outside the battlefield, so you will have to limit (or 'clamp') the predicted values of x and y such that they are inside the battlefield. But what will the enemy actually do when they encounter the wall? Sit there? Reverse? Predicting a robot's behaviour when close to the walls is quite difficult, to say the least.
A simple clamping method is shown below :


static Point2D.Double clamp(Point2D.Double pos, Rectangle2D field) {
	return new Point2D.Double(Math.min(Math.max(pos.x,0),field.getMaxX()),Math.min(Math.max(pos.y,0),field.getMaxY()));
}

I tried this, it doesn't work! (AKA how to calculate firepower)

Take a deep breath, we're going down the rabbit hole.
So you've noticed that at distance, sometimes we miss behind the enemy or in front of it.
The problem is not the linear prediction algorithm, but the fire power calculation algorithm.

Most people do something like this :

  1. get a predicted location based upon the time it would take a bullet to hit the enemy at its current location
  2. adjust the firepower to hit the enemy at the predicted location
  3. turn the gun to the predicted location
  4. fire with the power we just calculated above
That seems reasonable, doesn't it? We know a bullet is taking 21 ticks to hit the enemy at the moment, and we know we can speed or slow a bullet if we need to.
So we calculate a position for 21 ticks in the future, and speed the bullet up or slow it down a bit according to our prediction.
No matter how reasonable it sounds, no matter how much we try, it just doesn't work!

So now you need some more maths.
We actually have two 'functions'. One which predicts the enemy's location at a point of time in the future.
And one which predicts the amount of bullet power required to reach that location, at that point of time in the future.
What we need is some way of finding the point at which those two functions 'converge'. We need to find the 'root' of those functions.
To do that we need to use something called the Secant Method
I discovered this by reading this great article
I don't fully understand this so I can't explain it to you, but here is the code for a 'finished' linear targeting gun that uses the methods we've described above.
This gun will hit 'Walls' almost every time. You will see for yourself the point at which it does not hit Walls, and you can easily amend the code to fix the problem.
Notice that the required 'Enemy' class simply holds certain pieces of information received at the last onScannedRobot(ScannedRobotEvent e).
Also, double POWER_CONSTANT = 400d (or whatever you decide to make it), and ABSFIELD is a Rectangle2D which describes the size of the battlefield.

		
class LinearGun {
	final static long TICK_RANGE = 20;
	final static int ITERATIONS = 15;
	final static double ACCURACY = 0.01d;
	void fire(Enemy e) {
		// Find the tick at which bullet flight time and predicted enemy position converge.
		long ct = secant(time(e.distance),e);
		Point2D.Double p = predict(ct,e);
		// Bullet power (and therefore speed) appropriate to the predicted range.
		double bp = power(range(mePos,p));
		double calculatedBearing = bearing(mePos,p) - getHeadingRadians();
		double turnGun = Utils.normalRelativeAngle(getHeadingRadians()-getGunHeadingRadians() + calculatedBearing);
		setTurnGunRightRadians(turnGun);
		// Only fire when the gun is cool, nearly on target, and the power is legal.
		if (getGunHeat() != 0 || Math.abs(getGunTurnRemaining()) > 2||bp<Rules.MIN_BULLET_POWER) return;
		setFire(bp);
	}
	// The function whose root we want : the gap between the distance to the predicted
	// position and the distance our bullet would cover in 'time' ticks.
	double f(long time,Enemy e) {
		Point2D.Double d = predict(time,e);
		double r = range(mePos,d);
		return r - velocity(power(r)) * time;
	}
	long secant(long time,Enemy e) {
		double t0=time - (TICK_RANGE/2);
		double t1=time + (TICK_RANGE/2);
		double X = t1;
		double lastX = t0;
		int iterationCount = 0;
		double lastfX = f(Math.round(t0),e);			
		while ((Math.abs(X - lastX) >= ACCURACY) && (iterationCount < ITERATIONS)) {
			iterationCount++;
			double fX = f(Math.round(X),e);
			if ((fX-lastfX) == 0.0) break;
			long nextX = (long) (X - fX*(X-lastX)/(fX-lastfX));
			lastX = X;
			X = nextX;
			lastfX = fX;
		}
		return Math.round(X);			
	}
	Point2D.Double predict(long time, Enemy e) {
		double ax = e.velocity * Math.sin(e.heading);
		double ay = e.velocity * Math.cos(e.heading);			
	    double bx = (e.pos.x) - (ax * e.time);
	    double by = (e.pos.y) - (ay * e.time);
	    long t = e.time + time;
	    double px = ax * t + bx;
	    double py = ay * t + by;    
	    Point2D.Double ret = new Point2D.Double(px,py);
	    ret = clamp(ret,ABSFIELD);
	    return ret;
	}			
}	
// Bullet power falls off with distance, capped at the maximum the rules allow.
static double power(double distance) {return Math.min(POWER_CONSTANT/distance, Rules.MAX_BULLET_POWER);}
// Bullet speed is 20 - 3 * power; Rules.MAX_BULLET_POWER happens to equal 3.
static double velocity(double power) {return 20 - Rules.MAX_BULLET_POWER * power;}
// Rough time of flight for a bullet fired at something at this distance.
static long time(double distance) {return (long)(distance/velocity(power(distance)));}
static double range(Point2D.Double f, Point2D.Double t) {return Math.sqrt((t.x-f.x)*(t.x-f.x) + (t.y-f.y)*(t.y-f.y));}
// Robocode bearings are measured clockwise from north, hence atan2(dx, dy).
static double bearing(Point2D.Double f, Point2D.Double t) {return Math.atan2((t.x-f.x),(t.y-f.y));}
static Point2D.Double position(Point2D.Double currentPos, double angle, double length) {return new Point2D.Double(currentPos.getX() + Math.sin(angle) * length, currentPos.getY() + Math.cos(angle) * length);}
static Point2D.Double clamp(Point2D.Double pos, Rectangle2D field) {return new Point2D.Double(Math.min(Math.max(pos.x,0),field.getMaxX()),Math.min(Math.max(pos.y,0),field.getMaxY()));}

You can see here that you can keep this class and just overload the predict method to make it work with any form of targeting you like. We'll be doing that soon.
Anyway, back to linear targeting itself.
Of course, if you only have two sightings then everything looks like a straight line! What if your enemy is actually moving in a circle whose circumference just happens to pass through your two points? (see Oops! in figure 1) For any sort of accurate prediction you need at least three previous actual sightings of your enemy. This means that you have to maintain some sort of collection of Robocode sighting events (get out your book on Java Collections!)

The practice of trying to produce an equation that best 'fits' a set of coordinates is known as 'curve fitting'.
A very effective curve fitting method uses a mathematical technique called 'least squares linear regression' to calculate the a and b factors of the linear equation (y = ax + b) based on any number of previous observations. Below is an example piece of code for that.


public Point2D.Double predict(long time, Enemy e) {
	// First pass : the means of time, x and y over all previous sightings.
	double tbar=0,xbar=0,ybar=0;
	int size = 0;
	for (POI s:e.getSubPOI()) {
		size++;
		tbar += s.time;
		xbar += s.pos.x;
		ybar += s.pos.y;
	}
	tbar/=size;xbar/=size;ybar/=size;
	// Second pass : sums of squares and cross products about the means.
	double stt=0,stx=0,sty=0;
	for (POI s:e.getSubPOI()) {
		double tf = s.time-tbar;
		stt += tf*tf;
		stx += tf * (s.pos.x - xbar);
		sty += tf * (s.pos.y - ybar);
	}
	// Least squares slope (a) and intercept (b) for x(t) and y(t).
	double ax = stx / stt;
	double bx = xbar - ax * tbar;
	double ay = sty / stt;
	double by = ybar - ay * tbar;
	long t = e.time + time;
	// Evaluate both lines at the future time.
	double px = (ax * t + bx);
	double py = (ay * t + by);
	return clamp(new Point2D.Double(px,py),ABSFIELD);
}

You can see that this method (ignoring the squares part) is actually doing what we did above, calculating the a and b factors. But it does this on multiple sightings rather than just two. We've expanded the Enemy class to have an internal List of previous sightings which we're calling POI (point of interest), but a POI is actually just an Enemy object itself.
This method has the advantage of 'smoothing' the factors, such that if the enemy is travelling 'almost' in a straight line the prediction is more likely to be accurate. You can also modify this code fairly easily to produce statistics on how well the straight line fits the data. Then you can use that information to decide whether to trust these predictions at all.
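
As an aside (my own sketch, not part of the gun above), the usual goodness-of-fit measure is R². Computed from the same sums, it tells you how straight the recent movement really is; sxx here is assumed to be the sum of squares of (x - xbar), accumulated in the second loop alongside stt and stx:

// Sketch : coefficient of determination (R^2) for the x-versus-time fit.
static double rSquaredX(double stt, double stx, double sxx) {
	if (stt == 0 || sxx == 0) return 0;        // no spread in time or position, nothing to fit
	double r = stx / Math.sqrt(stt * sxx);     // correlation coefficient between t and x
	return r * r;                              // 1.0 means a perfectly straight line
}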

I should mention here that the method of linear targeting preferred by most people is this one (taken from the wiki linked to above) :
code here
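
I can't reproduce the wiki code verbatim here, but a rough sketch of the same iterative idea (step the enemy forward tick by tick until the bullet could have covered the distance) looks something like this, inside onScannedRobot on an AdvancedRobot:

// Sketch of iterative linear targeting; not the wiki's exact code.
double bulletPower = Math.min(400d / e.getDistance(), Rules.MAX_BULLET_POWER);
double bulletSpeed = 20 - 3 * bulletPower;
double absBearing = getHeadingRadians() + e.getBearingRadians();
double predictedX = getX() + e.getDistance() * Math.sin(absBearing);
double predictedY = getY() + e.getDistance() * Math.cos(absBearing);
// Step the enemy forward one tick at a time until the bullet could have reached it.
for (int t = 1; t * bulletSpeed < Point2D.distance(getX(), getY(), predictedX, predictedY); t++) {
	predictedX += e.getVelocity() * Math.sin(e.getHeadingRadians());
	predictedY += e.getVelocity() * Math.cos(e.getHeadingRadians());
}
double aim = Math.atan2(predictedX - getX(), predictedY - getY());
setTurnGunRightRadians(Utils.normalRelativeAngle(aim - getGunHeadingRadians()));
setFire(bulletPower);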

circular targeting

What happens if you discover that the enemy is not moving in a straight line at all, and your straight-line predictions are next to useless? All is not lost. Your next step is probably to assume that the enemy is moving in some form of arc (a circle). It is possible to calculate the centre point and radius of a circle given only 3 x,y coordinates. But although that is fairly easy, it is usually so imprecise as to be useless.

Luckily for us, Robocode provides us with the enemy's heading on every sighting. If we can work out what the average change in heading per tick is, we can calculate the position of the enemy using the following method (which overloads the predict method in the LinearGun class above)

		
Point2D.Double predict(long time, Enemy e) {
	//deltaH is calculated as below, in the onScannedRobot event
	//deltaH = (e.getHeadingRadians()-previousHeadingRadians) / (e.getTime() - previousTime);
	// Total change in heading over the prediction time.
	double ph = e.deltaH * time;
	// Turn radius = speed / angular velocity; zero means "treat as dead on".
	double rad = (e.deltaH==0)?0d:Math.abs(e.velocity/e.deltaH);
	setDebugProperty("dH","" + e.deltaH);
	setDebugProperty("rad","" + rad);
	// Walk around the circle from the current position by 'ph' radians.
	double px = e.pos.x + (Math.cos(e.heading) * rad) - (Math.cos(e.heading + ph) * rad);
	double py = e.pos.y + (Math.sin(e.heading + ph) * rad) - (Math.sin(e.heading) * rad);
	return clamp(new Point2D.Double(px,py),ABSFIELD);
}

This method (when inserted into the LinearGun class above in place of the linear prediction method) will hit SpinBot pretty much all the time. Note that when the change in heading is zero, radius is set to zero, and we begin firing in 'dead on' mode. If you wanted you could use a radius value of zero to switch to linear firing mode.
If you do this, you will find that you can hit all the standard robots, pretty much all the time.
You don't need to calculate the radius; there are alternative calculations. But you will find it useful and it is no extra code. If an enemy is actually circling you, you can usually tell by checking the radius of its orbit, which will likely be near to the distance between you and the enemy.
Notice also that the deltaH calculation does not 'smooth' the transition between 2 pi and 0.
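
One simple way to handle that wrap-around (my own suggestion, using the same fields as the comment in the code above) is to normalise the heading difference before dividing by the elapsed time:

// Sketch : wrap-safe change in heading per tick, computed in onScannedRobot.
// normalRelativeAngle maps the difference into (-PI, PI], so a heading that ticks
// over from just below 2*PI to just above 0 gives a small delta, not roughly -2*PI.
double deltaH = Utils.normalRelativeAngle(e.getHeadingRadians() - previousHeadingRadians)
		/ (e.getTime() - previousTime);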

If the enemy is travelling in a genuine circle then we should expect its change of heading over time to remain constant (linear). And we know from above how to predict things that are linear.
If we changed our chart in figure 1 such that it was plotting heading against time, we should get a straight line allowing us to predict heading from time using h = ah*t + bh.
We would have to ensure that each time the heading wrapped past zero, we incremented the actual heading prediction by 2 pi, otherwise the line wouldn't be straight.
I experimented with this a few times, but never managed to get it working properly. As it turns out, it's largely unnecessary to improve circular targeting anyway.

complex curve fitting

So the enemy may be moving in something that looks like a circle, but in fact is more like a parabola, or a complex sine wave. Wouldn't it be nice if you could fit all their previous observations into an equation that absolutely predicted their movement with total certainty? What if instead of moving in y = ax + b they're actually moving in y = ax² + bx + c (non-linear, quadratic)?

Well it is possible to fit such curves using least squares regression, but it's difficult and requires the use of some large classes to do matrix manipulation.
It's also unlikely that you can fit the complex behaviour of bots to a simple mathematical equation, no matter what degree the polynomial has.
Though in fact many simple wave surfers do actually move in a predictable wave, with a wavelength dictated by your firing rate, so a simple quadratic may well predict their behaviour?

virtual bullets

You cannot know for sure where the enemy will be. But you can know with absolute certainty if you have hit the enemy, and where the enemy was when you hit it.
If you kept a record of every bullet you fired along with the parameters you used to predict that shot, you could then see which predictions actually worked.
Once you've done that you realise that you don't actually need to fire any bullets. You can fire 'virtual bullets' and just keep checking them to see which ones would have hit. You could then update your calculations.
In practice though, virtual bullets are not very successful, because the enemy only reacts to your actual bullets; the rest of the time the enemy is virtually motionless.
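
For what it's worth, the bookkeeping can be as small as this sketch (VirtualBullet and its field names are just my illustration):

// Sketch : record each (virtual) shot and, every tick, see which ones would have hit.
class VirtualBullet {
	final Point2D.Double origin;   // where we were when we "fired"
	final double angle, speed;     // the aim we are testing, and the bullet speed
	final long firedAt;            // game time of the shot
	final String gunName;          // which predictor produced this aim
	VirtualBullet(Point2D.Double origin, double angle, double speed, long firedAt, String gunName) {
		this.origin = origin; this.angle = angle; this.speed = speed;
		this.firedAt = firedAt; this.gunName = gunName;
	}
	// Would this bullet be inside the enemy's 36x36 bounding box right now?
	boolean hits(Point2D.Double enemyPos, long now) {
		double travelled = (now - firedAt) * speed;
		double bx = origin.x + Math.sin(angle) * travelled;
		double by = origin.y + Math.cos(angle) * travelled;
		return Math.abs(bx - enemyPos.x) < 18 && Math.abs(by - enemyPos.y) < 18;
	}
}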

chasing bullets

You notice when you're battling a 'wave surfer' that it keeps changing direction (reversing). When it does that, all your predictions (linear, circular etc.) about its future location are useless. You feel that you can probably know the point at which the enemy is going to change direction. It's just before the bullet you fired arrives at the location you predicted. Very frustrating.
But your current predictor is not taking account of shots you've fired; it is simply trying to predict location based on the linear or circular movement of the enemy. What if you kept a record of when the enemy begins to reverse its direction in relation to how close your bullet is to the enemy?

You could then follow up your shots with a 'chasing bullet' aimed at the location the enemy would be if they 'reversed' to avoid your first shot.
If you fired the follow up shot with less power, it would arrive at the alternative spot at the same time that your first shot arrived.
This technique can work extremely effectively against certain enemies.
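
As a rough illustration of the timing, you can work out what power the follow-up shot needs so that both bullets land together (a sketch, assuming you know how many ticks remain before the first bullet arrives):

// Sketch : choose the follow-up power so both bullets land at the same tick.
// ticksRemaining = ticks until the first bullet reaches its predicted spot;
// altDistance    = our distance to the alternative ("reversed") position.
static double followUpPower(double altDistance, double ticksRemaining) {
	double neededSpeed = altDistance / ticksRemaining; // speed the second bullet needs
	double power = (20 - neededSpeed) / 3;             // invert speed = 20 - 3 * power
	return Math.max(Rules.MIN_BULLET_POWER, Math.min(power, Rules.MAX_BULLET_POWER));
}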

waves

You can also know where the enemy was, when you missed it!
When you fire a bullet you could create an object that expands a circle centred on the position you were at when you fired.
A method on that object can return to you a circle (Ellipse2D object) the radius of which is the distance your bullet has moved since you fired it.
You can periodically check to see if the enemy is now within that circle.
The location of the enemy at the point at which your circle touches them is the position you SHOULD have fired at. Armed with this information you can adjust some 'factors' of your targeting so that your prediction fires at a slightly different location next time.
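
A minimal sketch of such a wave object (my own illustration of the idea described above):

// Sketch : a wave remembers where and when we fired, and expands at bullet speed.
class Wave {
	final Point2D.Double origin;   // our position when we fired
	final double speed;            // 20 - 3 * power of the bullet
	final long firedAt;            // game time of the shot
	Wave(Point2D.Double origin, double speed, long firedAt) {
		this.origin = origin; this.speed = speed; this.firedAt = firedAt;
	}
	// The circle the bullet front has reached by 'now'.
	Ellipse2D.Double front(long now) {
		double r = (now - firedAt) * speed;
		return new Ellipse2D.Double(origin.x - r, origin.y - r, 2 * r, 2 * r);
	}
	// True once the wave has passed over the enemy : that position is where we SHOULD have aimed.
	boolean hasReached(Point2D.Double enemyPos, long now) {
		return (now - firedAt) * speed >= origin.distance(enemyPos);
	}
}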

Statistical (aka : aim factor) targeting

Armed with wave information telling you where you should have fired last time, you can now create a mechanism for firing at the most likely future position.

There is a limited range of bearings that the enemy could be at because there is a maximum speed that the enemy can move at.
The enemy also has a known width; hitting the enemy anywhere along its length is fine, you don't have to aim dead centre.
This means that you can 'quantise' or break up the range of possible bearings (the escape angle) into discrete chunks (usually called 'bins'). Once you have done this you can start to perform some sort of statistical analysis of which 'bin' the enemy is in most frequently. Essentially we've reduced the scale of the challenge from infinite (continuous) to finite (discrete).

Let's start by assuming that we are the maximum possible distance from the enemy (the diagonal distance of the battlefield) and that we are firing with maximum power.
We can now calculate the total arc length, divide it by the robot size, and calculate how many bins we need.
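
As a sketch of that calculation (the 800x600 battlefield is just an assumed example; 36 is the robot's width, and the escape angle comes from the standard asin(maxVelocity / bulletSpeed) rule):

// Sketch : how many bearing bins do we need in the worst case?
double maxDistance = Math.sqrt(800*800 + 600*600);                    // diagonal of an 800x600 field
double bulletSpeed = 20 - 3 * Rules.MAX_BULLET_POWER;                 // slowest bullet, 11 px/tick
double escapeAngle = Math.asin(Rules.MAX_VELOCITY / bulletSpeed);     // max angle the enemy can reach, either side
double arcLength   = 2 * escapeAngle * maxDistance;                   // arc the enemy could cover at max range
int    bins        = (int) Math.ceil(arcLength / 36);                 // one bin per robot width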

Of course it's no good just firing at the most visited bin if your bins are a simple division of the possible range of bearings, because as you get closer to the enemy they will appear to be in a bin closer to the centre bin.
If we also calculated the maximum distance the enemy can be from us (the maximum radius of their orbit) we can also divide that by the size of the enemy robot. This process is usually called 'segmentation'. We segment the distance, and each distance segment contains a set of bearing bins.

As an absolute minimum you need to segment on distance, but there is nothing stopping you from segmenting on anything you like.
A useful additional segmentation might be 'velocity'. Because if your enemy is slowing it's likely to be a precursor to a change in behaviour.
A lot of people also segment on the amount of time that has elapsed since the enemy last came to a stop, or changed direction.
The trouble is that the more you segment the more data there is to hold in memory, and the more there is to process on each wave.
The process of trying to optimise the memory structures needed to hold, and retrieve segmentation data is usually called 'dynamic segmentation'.
I'm not going to get into it here; there are several ways to do this using binary-tree-like structures, but all of those mechanisms massively increase your codebase.
Perhaps the easiest way is to create multidimensional arrays that are pre-optimised based on distance, and to pick your segmentation strategies well.
If a segment is making little difference, then don't use it.
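
As a sketch of the simple pre-sized approach (the segment width and bin count here are arbitrary choices of mine):

// Sketch : distance-segmented aim factor bins.
static final int DISTANCE_SEGMENTS = 5;    // e.g. 0-200, 200-400, ... pixels
static final int BINS = 47;                // odd, so the middle bin means "dead ahead"
static double[][] stats = new double[DISTANCE_SEGMENTS][BINS];

// When a wave passes over the enemy, record which bin it ended up in.
static void registerHit(double distance, int bin) {
	int seg = Math.min((int)(distance / 200), DISTANCE_SEGMENTS - 1);
	stats[seg][bin]++;
}

// When firing, aim at the most visited bin for the current distance segment.
static int bestBin(double distance) {
	int seg = Math.min((int)(distance / 200), DISTANCE_SEGMENTS - 1);
	int best = BINS / 2;                    // default : head-on
	for (int i = 0; i < BINS; i++) if (stats[seg][i] > stats[seg][best]) best = i;
	return best;
}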

As Robocode evolved, people began to realise that they could confound aim factor targeting by maintaining a mirror set of statistics.
So you would calculate a wave when the enemy fires at you, note when you were hit, and make sure you were in a different 'bin' next time.
The process of doing this causes an averaging of the enemy's statistics, making aim factor targeting nearly useless. This movement strategy is discussed in the movement sections below.

AI

Statistical analysis techniques like "segmented aim factor targeting" are the foothills of a very large mountain. At the top of the mountain is probably "Neural Net" prediction.
You create a neural net with input/output nodes that map to your bins and segments. Each wave causes the net to recalculate its weights. Hopefully the net will eventually 'learn' how your enemy moves. Of course this has been tried many times, but one of the things that confounds this approach is the length of Robocode rounds. A net simply hasn't got enough time to stabilise.

targeting summary - some simple advice
  • The closer you are the more chance you have of hitting the enemy. Choose close targets wherever you can. Close the gap whenever possible.
  • The slower the enemy is moving the more chance you have of hitting it. Choose slow or stationary targets wherever you can.
  • There's a limit to bullet power. If you have far more health than your opponent, close the gap, because there is no way they can make back the difference.
  • The world is a big place full of clever people. You will not win every battle. Your realistic aim is simply to be competitive.

movement

There are many different movement strategies:

  • Being still until hit, then moving to another location.
  • Orbiting the perimeter of the game area
  • Driving directly at the opponent (ramming)
  • Random movement
  • Elliptical or deliberately complex cyclical movement

But the two main movement strategies are known as Anti-Gravity and Wave Surfing.

Anti-Gravity

If you're in a round with multiple opponents (melee) then a neat strategy is to simply move proportionally (based on their distance and other factors) away from all other opponents, and the walls. This means that your robot will usually be constantly moving, based upon the seemingly random movement of your multiple enemies. This strategy is very hard to predict, but it also has several pitfalls. Your bot will tend to almost always be in a corner or the centre of the battlefield. You can mitigate this by adding simulated enemies at strategic locations (the corners and centre etc.) You can also add simulated enemies (with a short lifespan) at any location you are hit at, causing you to move away from 'hot' areas. All of this requires you to keep a record of all enemies you have sighted, and to keep that record updated with recent sightings.

The standard technique is to use 'gravity vectors'. So each robot exerts a repelling force based on its distance from you (typically stronger the closer it is). You then sum the forces, which produces a vector that you can then drive along.
In practice this leaves you unable to predict the exact cartesian coordinates that you need to drive to. You cannot 'visualise' your destination.
Worse, it leaves you with the problem of 'wall smoothing' (calculating whether your current course is going to collide with the wall).
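
For reference, a minimal sketch of that gravity-vector sum (using the same Enemy, mePos, range and bearing helpers assumed earlier; the force constant is arbitrary):

// Sketch : sum a repelling "force" from every enemy, strongest when they are close.
double forceX = 0, forceY = 0;
for (Enemy enemy : enemies) {
	double dist  = range(mePos, enemy.pos);
	double angle = bearing(enemy.pos, mePos);          // direction pointing away from the enemy
	double force = 1000 / (dist * dist);               // arbitrary constant; tune to taste
	forceX += Math.sin(angle) * force;
	forceY += Math.cos(angle) * force;
}
// Drive along the resulting vector.
double goAngle = Utils.normalRelativeAngle(Math.atan2(forceX, forceY) - getHeadingRadians());
setTurnRightRadians(goAngle);
setAhead(100);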

A slightly neater way is to move to the spot in the battlefield which represents the 'emptiest' area. So if the battlefield is 1000*1000, and our enemy is at x=800,y=200, then a candidate x location for us might be x=400,y=600, in the middle of the largest x and y coordinate empty space. So if we looped over all of our enemies and calculated the average empty space, we have a tangible x,y cartesian coordinate to drive to.
Below is an example :

	
double x=0,y=0;
for(Enemy enemy:e) {
	// Each enemy "votes" for the point mirrored away from it (simplifies to maxX - enemy.x).
	x += ((100-((enemy.pos.x/FIELD.getMaxX())*100))/100) * FIELD.getMaxX();
	y += ((100-((enemy.pos.y/FIELD.getMaxY())*100))/100) * FIELD.getMaxY();
}
// Average the votes to get a single destination point.
x/=e.size();y/=e.size();
Point2D.Double drivingTo = new Point2D.Double(x,y);
if (range(mePos,drivingTo)<10||!FIELD.contains(drivingTo)) return;
// Compact "go to" drive : turn towards the point and move, reversing if it's behind us.
double goAngle = Utils.normalRelativeAngle(Math.atan2(x - mePos.x, y - mePos.y) - getHeadingRadians());
setTurnRightRadians(Math.tan(goAngle));
setAhead(Math.cos(goAngle)*Math.sqrt((mePos.x-x)*(mePos.x-x)+(mePos.y-y)*(mePos.y-y)));

The actual drive code is credited to the author of a bot named 'Seraphim'. I like it because it's simple, and effective.

This method keeps you away from the walls, but that does mean (as mentioned above) that you'll find yourself generally in the centre, which is a bad place to be in melee.
To avoid this you can drop strategic 'fake' enemies into your averaging calculation etc. Or you can adjust the averaging calculation to put you closer to the walls.

Wave Surfing

If you are 1v1, you can guess whether your enemy has fired at you by monitoring their energy. If they lose energy and you haven't hit them, then they have fired at you. From the energy loss you can guess the speed of their bullet. But you have no way of knowing for certain what direction it's travelling in. You can of course guess. You know where you were when the enemy fired; if they're firing straight at you then the bullet will be moving from their current location to your current location. Making a precise prediction for the vector (angle and velocity) of enemy bullets works well against the simplest bots that use dead on targeting ("Walls" or "Corners" etc.) But complex enemies are unlikely to be firing at where you are now, they'll be firing at where they think you'll be when the bullet reaches you. The trick then, is to guess what their guess is, and not be there :)
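
A minimal sketch of the energy-drop detection (assuming an AdvancedRobot and a previousEnergy field you maintain yourself):

// Sketch : spot an enemy shot by watching for a drop in their energy.
public void onScannedRobot(ScannedRobotEvent e) {
	double energyDrop = previousEnergy - e.getEnergy();
	// Bullet power is always between 0.1 and 3.0; other drops are wall hits, ramming, or our own bullets landing.
	if (energyDrop >= Rules.MIN_BULLET_POWER && energyDrop <= Rules.MAX_BULLET_POWER) {
		double enemyBulletSpeed = 20 - 3 * energyDrop;   // the speed of the bullet they just fired
		// ...start a wave here : origin = the enemy's position, fired one tick ago.
	}
	previousEnergy = e.getEnergy();
}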

As mentioned in the "aim factor" section above, it is possible to use the same technique (segmentation) to avoid enemy bullets.

Wave surfing usually also involves trying to avoid a wave touching you at all. Obviously the wave is going to pass you at some point, but the trick to surfing is in choosing the point at which you let the waves pass you. This technique produces a visually recognisable movement style. You can usually spot a wave surfer pretty easily from its general movement.

Other strategies

Ramming

If you calculate that you have far more energy than your opponent you can simply ram them and fire with maximum power. Ramming your opponent may not be as easy as you'd think though; they will usually try to avoid you in this situation, in which case you may not be able to simply drive straight to their last spotted location.

The best approach strategy often depends upon your firing algorithm. Some firing strategies will cause your enemy to virtually freeze on the spot, making your approach much easier (though you should still avoid a straight-line approach!), while others will simply avoid you, causing you to spiral around each other. This strategy sometimes works because aim factor guns and wave surfing movements have not had time to beat your linear targeting and tangential approach, meaning you hit them more than they hit you during the approach, and you are on them and firing with maximum power before they can move. You're not going to beat many megabots with this approach, but it'll beat a surprising number of simpler wave surfers. If you try this on SandboxDT it'll crush you like an empty drinks can.

helpful code

One of the things that is constantly useful is the ability to maintain 'rolling averages'.
Below is a class (taken from my utils project) that keeps a rolling average of 'double' numbers.
I found this method on the internet somewhere, removed the generics and the comments (to shrink it for Robocode), and now I cannot remember whose code this was.


public static class MovingAverage {
	private final Queue<java.lang.Double> window = new LinkedList<java.lang.Double>();
	private final int period;
	public long count = 0;
	private double sum;
	private double min = java.lang.Double.MAX_VALUE;
	// Double.MIN_VALUE is the smallest *positive* double, so start max at negative infinity instead.
	private double max = java.lang.Double.NEGATIVE_INFINITY;
	public MovingAverage(int period) {this.period = period;}     
	public void put(double num) {
		count++;
		if (num<min) min = num;
		if (num>max) max = num;
		sum += num;
		window.add(num);
		if (window.size() > period) {
			sum -= window.remove();
		}
	}
	public long getCount() {return this.count;}
	public double getMax() {return max;}
	public double getMin() {return min;}    
	public double getAverage() {
		if (window.isEmpty()) return 0;
		return sum / window.size();
	}
}   
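
Typical usage (my own example) is one instance per statistic you want smoothed:

// Sketch : keep a rolling average of the enemy's change in heading over the last 10 sightings.
MovingAverage deltaHeading = new MovingAverage(10);
deltaHeading.put(deltaH);                        // call this from onScannedRobot
double smoothedDeltaH = deltaHeading.getAverage();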

memory

There is a special category of robot that can persist information about enemies it has previously encountered to disk.

But when you get into that category, you're up against genetic algorithms and neural net bots. It also doesn't confer as much of an advantage as you might think.

However, it is very useful (especially if you're using aim factor guns) to keep information in memory between rounds in a battle.

You can do this either by making a singleton, or simply by not destroying (or recreating on round start) the important objects of your robot.
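
The simplest sketch of that: static members survive from round to round within a battle, because Robocode doesn't reload your robot's class between rounds (MyBot is just a placeholder name):

// Sketch : statistics stored in a static field survive between rounds of the same battle.
public class MyBot extends AdvancedRobot {
	// Re-created only when a new battle starts, not on every round.
	static double[][] aimStats = new double[5][47];

	public void run() {
		// aimStats still holds whatever the previous rounds learned.
	}
}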