Image Transcription: Code
bool is_prime(int x) { return false; }
[Beneath the code is a snippet of console output, as follows:]
test no.99989: passed
test no.99990: passed
test no.99991: failed
test no.99992: passed
test no.99993: passed
test no.99994: passed
test no.99995: passed
test no.99996: passed
test no.99997: passed
test no.99998: passed
test no.99999: passed
95.121% tests passed
I am a human who transcribes posts to improve accessibility on Lemmy. Transcriptions help people who use screen readers or other assistive technology to use the site. For more information, see here.
good human
Why not just test all even numbers greater than 2? That covers infinitely many numbers and passes 100% of the time.
It’s important to test edge cases.
Wow, a neural network!
How long would this have to run for it to round up to 100%?
A few calculations:
- There are 9592 prime numbers less than 100,000. Assuming the test suite only tests numbers 1-99999, the accuracy should actually be only 90.408%, not 95.121%
- The 1 trillionth prime number is 29,996,224,275,833, so even checking every number up to that point (roughly 30 trillion of them) would only get you to about 96.666% accuracy.
- The density of primes can be approximated using the Prime Number Theorem: 1/ln(x). Solving 99.9995 = 100 - 100/ln(x) for x gives e^200000, or about 7.88 × 10^86858. In other words, the universe will end before any current computer could check that many numbers.
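A quick sketch to double-check those numbers (plain C++; the sieve limit, the 1 trillionth prime, and the 99.9995% rounding threshold are taken from the comment above, the rest is just for illustration):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // Count primes below 100,000 with a simple sieve of Eratosthenes.
    const int N = 100000;
    std::vector<bool> composite(N, false);
    int primes = 0;
    for (int i = 2; i < N; ++i) {
        if (composite[i]) continue;
        ++primes;
        for (long long j = 1LL * i * i; j < N; j += i) composite[j] = true;
    }

    // An is_prime() that always returns false is wrong exactly on the primes.
    double accuracy = 100.0 * (N - 1 - primes) / (N - 1);  // tests 1..99999
    std::printf("primes below 100000: %d -> accuracy %.3f%%\n", primes, accuracy);

    // Accuracy when testing every number up to the 1 trillionth prime.
    const double trillionth_prime = 29996224275833.0;
    std::printf("up to the 1e12-th prime -> accuracy %.3f%%\n",
                100.0 * (1.0 - 1e12 / trillionth_prime));

    // Rounding up to 100% (>= 99.9995%) needs 100/ln(x) <= 0.0005, i.e. x >= e^200000.
    // e^200000 overflows a double, so report it as a power of ten instead.
    std::printf("needed range: x = e^200000 ~= 10^%.0f\n", 200000.0 / std::log(10.0));
    return 0;
}
```

The sieve confirms 9592 primes below 100,000, and the last line comes out to about 10^86859, consistent with the 7.88 × 10^86858 figure above.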
This is a really fun question and now I’m nerd sniped
You're joking, but this is exactly what happens if you optimize a classifier for accuracy when positive cases are very rare. The algorithm will simply label everything as negative, and accuracy will still be extremely high!
This is also why medical studies never use accuracy as a measure if the disorder being studied is at all rare. Sensitivity and specificity, or positive/negative likelihood ratios, are more common.
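For a concrete (entirely made-up) illustration: with a hypothetical 10,000-patient cohort and 1% prevalence, a "test" that calls everyone negative, just like the is_prime above, scores 99% accuracy while its sensitivity is 0% and its negative likelihood ratio is 1, meaning a negative result tells you nothing:

```cpp
#include <cstdio>

int main() {
    // Hypothetical screening cohort: 10,000 patients, 1% prevalence.
    const int total = 10000;
    const int sick = 100;
    const int healthy = total - sick;

    // A "classifier" that labels everyone as negative:
    const int true_pos = 0;        // sick patients correctly flagged
    const int false_neg = sick;    // sick patients missed
    const int true_neg = healthy;  // healthy patients correctly cleared
    const int false_pos = 0;       // healthy patients wrongly flagged

    double accuracy = 100.0 * (true_pos + true_neg) / total;         // 99%
    double sensitivity = 100.0 * true_pos / (true_pos + false_neg);  // 0%
    double specificity = 100.0 * true_neg / (true_neg + false_pos);  // 100%
    // Negative likelihood ratio: (1 - sensitivity) / specificity, as proportions.
    double lr_neg = (1.0 - sensitivity / 100.0) / (specificity / 100.0);

    std::printf("accuracy:    %.1f%%\n", accuracy);
    std::printf("sensitivity: %.1f%%\n", sensitivity);
    std::printf("specificity: %.1f%%\n", specificity);
    std::printf("LR-:         %.2f (1.0 means the result changes nothing)\n", lr_neg);
    return 0;
}
```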