Test thresholds define how many records trigger a "failed" test.
For example, a custom test by default fails if more than 0 records are returned: the threshold is set to "high is bad, greater than 0". If the dataset should normally be empty but for some reason contains records, the test fails.
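This default rule can be sketched as a simple check. The function name and return values below are illustrative, not part of any specific tool's API:

```python
def evaluate_custom_test(records_returned: int, threshold: int = 0) -> str:
    """Default 'high is bad' rule: returning more records than the
    threshold fails the test (hypothetical helper for illustration)."""
    return "fail" if records_returned > threshold else "pass"

# A dataset that should be empty passes only when it returns no rows.
print(evaluate_custom_test(0))  # pass
print(evaluate_custom_test(3))  # fail
```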
Thresholds on columns work on either the values themselves or the number of records, depending on the type of test. In a compare test, if a test scans 100 records and finds that 7 are different, the test fails, because the default threshold treats anything greater than 0 as a failure. You can define an absolute number of records that triggers a warning or failure, or use a percentage of the total records. If you set the threshold to 10, this case would not fail.
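A sketch of how failed-record counts might map to warn/fail outcomes, supporting both absolute counts and percentages. The parameter names (`warn_at`, `fail_at`, `use_percent`) are assumptions for illustration:

```python
def evaluate_compare_test(failed, total, warn_at=None, fail_at=0,
                          use_percent=False):
    """Map a count of failed records to a test outcome.

    fail_at / warn_at are absolute record counts by default, or
    percentages of the total when use_percent is True.
    (All names here are hypothetical, not a real tool's API.)
    """
    metric = (failed / total * 100) if use_percent else failed
    if metric > fail_at:
        return "fail"
    if warn_at is not None and metric > warn_at:
        return "warn"
    return "pass"

# 7 differing records out of 100: fails with the default threshold of 0,
# but passes when the threshold is raised to 10.
print(evaluate_compare_test(7, 100))              # fail
print(evaluate_compare_test(7, 100, fail_at=10))  # pass
```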
You can also see the fields being compared and set thresholds on them. By default, each field has a two-way "center is good" threshold equal to 0: each comparison must have a difference of 0, or that row fails.
The two-way setting means that a difference in either direction, less than or greater than the threshold value, fails. To fail only on greater-than differences, or only on less-than differences, set a one-way threshold. The difference is measured relative to the master dataset, which is defined in the compare rule.
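The field-level logic can be sketched as follows, with the difference taken relative to the master value. The function and mode names are illustrative assumptions:

```python
def field_fails(master_value, other_value, threshold=0.0, mode="two_way"):
    """Field-level threshold check (hypothetical names).

    The difference is measured relative to the master dataset's value.
    """
    diff = other_value - master_value
    if mode == "two_way":        # 'center is good': either direction fails
        return abs(diff) > threshold
    if mode == "one_way_high":   # only greater-than differences fail
        return diff > threshold
    if mode == "one_way_low":    # only less-than differences fail
        return -diff > threshold
    raise ValueError(f"unknown mode: {mode}")

# Two-way: a difference of 3 in either direction exceeds a threshold of 2.
print(field_fails(100, 103, threshold=2))                      # True
# One-way high: a value below the master does not fail.
print(field_fails(100, 97, threshold=2, mode="one_way_high"))  # False
```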
The thresholds on fields determine whether each record fails. The thresholds on the test determine whether the test fails, based on how many records failed.
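Putting the two levels together: field thresholds mark individual records as failed, and the test threshold then judges the failed-record count. This is a minimal sketch with hypothetical names, assuming the default two-way, zero-difference field threshold:

```python
def run_compare_test(master_rows, other_rows, field_threshold=0.0,
                     record_fail_threshold=0):
    """Two threshold levels (names illustrative): field thresholds decide
    which records fail; the test threshold decides whether the test fails
    based on the count of failed records."""
    failed = sum(
        1 for m, o in zip(master_rows, other_rows)
        if abs(o - m) > field_threshold  # two-way, relative to master
    )
    return "fail" if failed > record_fail_threshold else "pass"

# One differing row fails the default test threshold of 0,
# but passes once the test tolerates 1 failed record.
print(run_compare_test([1, 2, 3], [1, 2, 4]))                           # fail
print(run_compare_test([1, 2, 3], [1, 2, 4], record_fail_threshold=1))  # pass
```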