HKOI 2005 Training New Problem Types: Interactive and Open Test Data
Roadmap • Interactive Tasks • Partial Scoring • Open Test Data Tasks
The Traditional Problem Type • Read input, write output • Input is static • It does not change according to your output • Your program’s output is compared with the judge’s output, or another program (a judging program) is used to check its correctness • That seems like the only possibility; what else could it be?
Interactive Tasks • Input is dynamic • Successive input items depend on your previous output items • You are forced to produce output before you can continue reading input • “Interactive” doesn’t mean the process of programming is interactive!
Example: Guess-the-number • My lucky number is an integer between 1 and 5; guess it • Each time you make a guess (an integer), I will reply with one of: • Too big • Too small • Correct
Example: Guess-the-number • You: 1? Me: Too small. • You: 2? Me: Too small. • You: 4? Me: Too big. • You: 5? Me: Too big. • You: 3? Me: Correct.
Interactive Tasks • In the judging environment, you are the submitted program • Note that the input varies to suit your output • How can the judge achieve this effect? • Another program! • In fact there is another way to achieve interactivity • A library!
Roadmap • Interactive Tasks • Interactivity via Standard I/O • Interactivity via Libraries • Partial Scoring • Open Test Data Tasks
Interactivity via Standard I/O • Your program reads from standard input as in a traditional task • However, at certain points, your program must stop reading, produce output, and then go on reading • Simply reading on without producing output may lead to errors • A judging program reads your output and produces suitable input for you
Interactivity via Standard I/O • Diagram: the submitted program and the judging program are connected through standard I/O. The submitted program’s guesses (1, 2, 3) leave via its standard output and become the judging program’s input; the judging program’s replies (“Too small”, “Too small”, “Correct”) come back on the submitted program’s standard input.
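As a concrete sketch (not part of any official task statement), here is how a guess-the-number solution might talk to the judging program over standard I/O in C++. The reply strings follow the earlier example, and the flush after every guess (endl) matters, because the judging program cannot answer a guess it has never received.

  #include <iostream>
  #include <string>
  using namespace std;

  int main() {
      int lo = 1, hi = 5;                  // the lucky number is between 1 and 5
      while (lo <= hi) {
          int mid = (lo + hi) / 2;
          cout << mid << endl;             // endl flushes the guess to the judging program
          string reply;
          getline(cin, reply);             // read the judge's reply line
          if (reply == "Correct") break;
          else if (reply == "Too big") hi = mid - 1;
          else lo = mid + 1;               // "Too small"
      }
      return 0;
  }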
Interactivity via Standard I/O • This type of interactivity first appeared in IOI1995 • I/O is done much as in traditional tasks, but pay attention to the order of input and output • When to read? When to write? Read the problem description carefully
Interactivity via Libraries • A library is a collection of subprograms used to develop software (Wikipedia) • To use a library, • In Pascal: uses somelib; • In C/C++: #include "somelib.h" • The problem description should contain instructions on using the provided library • But you should learn it now
Function Declarations • Pascal examples: • procedure myproc(x: integer; var c: char); • function myfunc(x, y: integer): integer; • C++ examples: • void myproc(int x, char &c); (Not in C) • int myfunc(int x, int y); • The declaration and meaning of each library function provided to you will be shown in the problem description
Interactivity via Libraries • Recall the game of Guess-the-number • Suppose we are given a library guesslib with one function: • Pascal: function guess(x: integer): integer; • C/C++: int guess(int x); • guess takes an integer parameter, your guess, and returns 1 for “Too big”, -1 for “Too small”, 0 for “Correct”
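A minimal sketch of a solution built on top of guesslib, assuming the header is named guesslib.h (the actual file name and usage rules come from the problem description):

  #include "guesslib.h"          // provided library: int guess(int x);

  int main() {
      int lo = 1, hi = 5;        // search range for the lucky number
      while (lo <= hi) {
          int mid = (lo + hi) / 2;
          int r = guess(mid);    // 1 = too big, -1 = too small, 0 = correct
          if (r == 0) break;
          if (r == 1) hi = mid - 1;
          else lo = mid + 1;
      }
      return 0;
  }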
Interactivity via Libraries • The implementations of the library functions will NOT be given to you • Sometimes you are also required to read from standard input or an input file for initial data • Always read the problem description carefully • For your information, this type of interactivity first appeared in IOI1996
Hands-on • Guess-the-number • Celebrity
Interactive Tasks • We have discussed two methods to achieve interactivity • These are just I/O mechanisms; they may have nothing to do with the problem itself • Interactivity has given rise to three new major classes of contest problems • Detection • Two-person games • Online problems
Roadmap • Interactive Tasks • Detection • Two-person Games • Online Problems • Partial Scoring • Open Test Data Tasks
Detection • Uncover facts using a specialized detector • Examples: • Celebrity • Guess-the-number • Usually the detector can be used only a limited number of times; otherwise the problem would become too easy
Detection: Strategies • Bisection or Binary Search • It guarantees a good worst case (only a logarithmic number of queries) • Application: Guess-the-number • Any other?
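Rough worked example: if the lucky number lies between 1 and N, bisection needs only about log₂ N queries in the worst case, so for N = 1 000 000 roughly 20 guesses suffice, whereas guessing the candidates one by one may need up to about N guesses.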
Two-person Games • The judging program (or the library) and your program play a two player game • Examples: • Tic-tac-toe • Nim • Your score depends on how well your program plays against the opponent (judge)
Two-person Games • Unfortunately, your opponent usually plays optimally, so a single wrong move can be enough to lose • Common strategies and techniques • Minimax • Something from Game Theory • Ad-hoc • Some well-known or easy-to-derive strategies • Application: Nim
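For Nim in particular, the classical theory gives a direct winning strategy based on the xor (nim-sum) of the pile sizes: if the nim-sum is non-zero, reduce some pile so that the nim-sum becomes zero. Below is a small C++ sketch of that idea; the vector-of-piles representation and the (pile index, new size) return value are illustrative assumptions, since a real task defines its own interface.

  #include <vector>
  #include <utility>
  using namespace std;

  // Return (pile index, new pile size) for a winning Nim move,
  // or (-1, -1) if the current position is already losing.
  pair<int, int> nimMove(const vector<int>& piles) {
      int x = 0;
      for (size_t i = 0; i < piles.size(); ++i) x ^= piles[i];  // nim-sum of all pile sizes
      if (x == 0) return make_pair(-1, -1);      // losing position: no winning move exists
      for (size_t i = 0; i < piles.size(); ++i) {
          int target = piles[i] ^ x;             // size that makes the overall nim-sum zero
          if (target < piles[i]) return make_pair((int)i, target);
      }
      return make_pair(-1, -1);                  // not reached when x != 0
  }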
Online Problems • “Online” does not mean “being connected to the Internet”! • A part of the input data is not available until some later time • Usually an online problem has its offline counterpart
Example: Online Sorting • Sort some given integers • Some integers may arrive late • Initially • We have 7, 4, 2, 5 • Bubble-sort them: 2, 4, 5, 7 • Then 3 and 8 arrive • We have 7, 4, 2, 5, 3, 8 • Bubble-sort them: 2, 3, 4, 5, 7, 8
Example: Online Sorting • If we use bubble sort, every time some new integers arrive, we need O(N²) time to re-sort the numbers • If we use insertion sort, every time a new integer arrives, we need only O(N) time to put it in place • So you can see insertion sort is more “online” than bubble sort
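A minimal sketch of the “online” insertion step in C++: each newcomer is slid into an already-sorted array in O(N) time, so nothing has to be re-sorted from scratch.

  #include <vector>
  #include <utility>
  using namespace std;

  // Insert x into a vector that is already sorted, keeping it sorted.
  // Shifting larger elements one step to the right costs O(N) per new integer.
  void onlineInsert(vector<int>& a, int x) {
      a.push_back(x);
      int i = (int)a.size() - 1;
      while (i > 0 && a[i - 1] > a[i]) {
          swap(a[i - 1], a[i]);
          --i;
      }
  }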
Online Problems • Most of the time an online problem is much harder to solve than its offline counterpart • Examples of online problems: • Mobile Phones (IOI2001) • Path Maintenance (IOI2003)
Roadmap • Interactive Tasks • Partial Scoring • Open Test Data Tasks
The Traditional Scoring Method • For a task, there are some test cases • If your program passes a certain test case, marks for that test case will be awarded to you • For each test case, you either get all or nothing
Partial Scoring • For each test case, you may get something between 0% and 100% exclusive • Why partial scoring? • The output consists of several parts • Sub-optimal solutions • The closer to the optimum, the more marks you can get
Partial Scoring • Measurements of optimality • How close is your answer to the optimal solution • How many queries (detections) have been made before your program produces the correct answer • Sometimes the optimal solution is not even known to the judge • Relative scoring
Relative Scoring • Your score depends on how close your answer is to the best known solution • The best known solution may be • The judge’s official solution (which may not be optimal) • The best solution among all submitted solutions
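Purely as an illustration (the actual formula is always stated in the problem description), a typical relative scheme for a minimization task might be: score = full marks × (best known cost ÷ your cost), so matching the best known solution earns full marks and worse solutions earn proportionally less.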
Examples of Partial Scoring • Example problem • Guess-two-numbers: I have two lucky integers 1 ≤ A < B ≤ 100, guess them • You may ask me questions in the form “Is it true that A ≤ X ≤ B?”, where X is an integer. • Maximum score per test case is 10 marks • Partial scoring 1 • 5 marks for each correctly guessed lucky integer
Examples of Partial Scoring • Partial scoring 2 • Let a, b be your output integers and A, B be the correct answer • Your score = max(10-|A-a|-|B-b|, 0) • Partial scoring 3 • If your answer is wrong, 0 marks; else • if number of queries ≤ 10, 10 marks; else • If number of queries ≤ 50, 5 marks; else • 0 marks
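Worked example for partial scoring 2: if the correct answer is A = 30, B = 70 and you output a = 28, b = 73, your score is max(10-|30-28|-|70-73|, 0) = max(10-2-3, 0) = 5 marks.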
Partial Scoring: Gallery • Berry (NOI2003) • Zenith Protected Linked Hybrid Zone (NOI2003)
Roadmap • Interactive Tasks • Partial Scoring • Open Test Data Tasks
Open Test Data Tasks • Official judging test cases must be kept confidential until the end of the contest • The above rule no longer holds after the introduction of open test data tasks • Given a number of test cases, you should submit, for each test case, an output file • First appeared in IOI2001
Characteristics of OTD Tasks • NP-Hard • No fast algorithms have yet been found • Relative scoring • Your solution is compared with an optimal or, more probably, a near-optimal solution • Special cases • Some test cases are actually special cases of the problem which is hard in general
Open Test Data Tasks: Strategies • Manually solve the first cases • Usually the first few test cases are easy enough to solve manually • Look at all test cases • There may be wicked but trivial test cases • Keep the CPU busy • Let your exhaustive search program run in the background
Open Test Data Tasks: Strategies • Make good use of what you have • OS, image editing program, other compilers, etc. • Identify the special cases • Special cases may be easier to deal with • Greedy, randomization, heuristics • Relative scoring does not require an optimal solution!
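As a toy illustration of the randomization idea (the items, target, and objective below are made-up stand-ins, not taken from any real task), a random-restart local search in C++ might look like this; it keeps the best answer found so far, in the spirit of keeping the CPU busy:

  #include <cstdlib>
  #include <ctime>
  #include <vector>
  #include <iostream>
  using namespace std;

  // Distance of a subset sum from the target value.
  long long dist(long long sum, long long target) {
      return sum > target ? sum - target : target - sum;
  }

  int main() {
      srand((unsigned)time(0));
      vector<int> items;                                   // toy data: pick a subset summing close to 'target'
      int raw[] = {31, 17, 62, 5, 44, 23, 90, 12};
      for (int i = 0; i < 8; ++i) items.push_back(raw[i]);
      long long target = 150, bestDiff = -1;
      for (int restart = 0; restart < 1000; ++restart) {   // random restarts
          vector<int> pick(items.size(), 0);
          long long sum = 0;
          for (size_t i = 0; i < items.size(); ++i)
              if (rand() % 2) { pick[i] = 1; sum += items[i]; }
          for (int step = 0; step < 500; ++step) {         // greedy local flips
              size_t i = rand() % items.size();
              long long next = sum + (pick[i] ? -items[i] : items[i]);
              if (dist(next, target) <= dist(sum, target)) { pick[i] ^= 1; sum = next; }
          }
          if (bestDiff < 0 || dist(sum, target) < bestDiff)
              bestDiff = dist(sum, target);                // keep the best result seen so far
      }
      cout << "best difference from target: " << bestDiff << endl;
      return 0;
  }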
Open Test Data Tasks • Double Crypt (IOI2001) • XOR (IOI2002) • Reverse (IOI2003) • Polygon (IOI2004) • Another Game of Tetris (NOI2002) • Berry (NOI2003) • Graduate (NOI2004)
Last Words • Interactive and open test data tasks are now quite common in IOI and NOI • They may appear in the coming Team Formation Tests
New Problem Types? • ROBOT (IOI2002 proposed task) • From the contest report: However, ROBOT was in the end rejected… we hope this innovative type of task will be accepted in future IOIs to broaden the IOI task repertoire even further. • Multiple Choice Questions (NOI) • Much like doing school exams... • Question papers and technical terms in Simplified Chinese…