Rating the intelligence of artificial systems is important for measuring progress in scientific and engineering methods. Unfortunately, there is currently no universal agreement on what constitutes an intelligent system, or on how to measure such a system’s intelligence.
This study attempts to quantify and explain progress in the RoboCup competition, one of whose premises is to advance the development of intelligent robotic systems; it may therefore be reasonable to extend the results to the field of robotics as a whole.
To quantify this progress, a method for rating human chess players is adapted to evaluate the robotic teams’ competence over the years. To interpret the results, we simultaneously analyze articles written within the competition in order to understand the major points of interest, breakthroughs, and so on.
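As a rough illustration of the kind of chess-style rating adaptation described above, the following minimal sketch assumes an Elo-style update rule (the specific variant used in the study is not detailed here); the team names, starting ratings, and K-factor are illustrative placeholders only.

```python
def expected_score(r_a, r_b):
    """Expected score of team A against team B under an Elo-style model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))


def update_ratings(r_a, r_b, score_a, k=32.0):
    """Update two teams' ratings after one match.

    score_a is 1.0 for a win by A, 0.5 for a draw, 0.0 for a loss.
    k controls how strongly a single result moves the ratings.
    """
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b


# Hypothetical example: two teams start from a common baseline rating.
ratings = {"team_x": 1500.0, "team_y": 1500.0}
ratings["team_x"], ratings["team_y"] = update_ratings(
    ratings["team_x"], ratings["team_y"], score_a=1.0
)
print(ratings)  # team_x's rating rises; team_y's falls by the same amount
```

Applying such an update over all recorded matches, season by season, yields a rating trajectory per team, which is one plausible way to track competence across years.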
The results indicate yearly improvements, of varying degree, in the capabilities of the participating teams.
This paper continues previous research that focused on measuring progress in the robotic technologies used in the “Small Size” (F180) RoboCup competitions. The previous research was based on data from the 1998-2002 competitions, and the current research extends the data set to include the 2003-2012 competitions.
While the previous research suggested that most of the progress was due to hardware improvements (e.g., better sensors and mechanical devices), we attribute the later progress to higher-level game strategies (e.g., ball pass-and-shoot strategies), some of which would not have been feasible without the aforementioned hardware improvements.