Frequently Asked Questions

When you contribute to State of K, you should follow two principles: seek the truth and treat human beings with dignity. These principles are of equal importance.

Seek the truth

Don't try to promote a particular conclusion or ideology. If you ask a question and don't find studies that lead to the conclusion you desire, don't manipulate the list in the hope of promoting your viewpoint. In the same vein, don't edit someone else's list so the answer matches your opinion. If you remain skeptical after building or editing a list, explore your skepticism by building a list for a related question (see "related questions to consider" below). Attempting to manipulate lists is also a waste of time, as the platform is designed to correct this kind of nonsense.

Treat human beings with dignity

Don't ask questions in a manner that encourages dislike or disrespect toward human beings. For example, in a question about immigration laws, refer to "undocumented immigrants" rather than "illegal immigrants" or "illegals" because the latter terms are dehumanizing. Make an effort to ask your question in a way that respects humanity.

State of K is a platform for answering questions on which scholarly research can be performed. This research is performed by gathering and analyzing quantitative and qualitative data. Some questions are appropriate for State of K; others are not.
Questions that are not appropriate for State of K
"Where can I buy Taylor Swift's latest album?"
This is a factual question that doesn't require gathering any data. Of course, this doesn't mean that the question isn't worth answering...
"How tall is Mt. Everest?"
On the one hand, this is an empirical question, because you can measure the height of a mountain. But it's not the kind of question suited for State of K, because the answer is a single, simple fact.
"What is the history of the Etruscans?"
This question is far too broad.
"Are soda taxes good for the public?"
This is getting closer, but the term "good" is too ambiguous to be the subject of empirical research. Good in what way?
Questions that are appropriate for State of K
"Do soda taxes reduce soda consumption?"
This is a good question for State of K because it focuses on a specific outcome.
"Do formula-fed babies sleep longer than breast-fed babies?"
Same as above.
"What explains the emergence of the doctrine of judicial review?"
A list for this question would comprise studies that offer different explanations for the emergence of judicial review.
"Why do some parents choose not to vaccinate their children?"
This question can be answered by studies that surveyed parents.

Avoid broad questions. Ask several narrow questions instead.

We often want to answer a broad question such as "Do soda taxes work?" or "Should I support or oppose soda taxes?". The best way to answer these questions is to build lists for several narrower questions, each of which examines a single consideration relevant to your decision. For example:

Broad question: "Do soda taxes work?"
Narrower questions:
  "Do soda taxes increase the price of soda?"
  "Do soda taxes reduce soda consumption?"
  "Do soda taxes reduce childhood obesity?"

Only add studies that directly address your question.

Sometimes it can be hard to distinguish between studies that directly address your question and studies that are just on the same general topic. Obviously, the title of the study is your first clue, but most of the time you'll need to check the abstract to really know.

If you have access to the full text, skim that as well. A study that, judging from its title, seems to answer only a broad question (e.g. "How Voter ID Laws Reduce Voter Turnout") may in fact have tested other, more specific questions (e.g. whether voter ID laws reduce turnout among racial minorities or among low-income voters). If you were gathering studies for a question like "Do voter ID laws reduce turnout among disadvantaged groups?", that study would be relevant, even though it wouldn't seem so from reading just its title.

Of course, we understand that the full text often isn't freely available, but we strongly encourage you to read the abstract of a study, which is almost always available either directly on State of K or via a link.

Here's an example of distinguishing between studies that directly address a question and those that are just on the same general topic. Say your question is: "Why do some parents choose not to vaccinate their children?"

Examples of studies that directly address this question
  1. Sense & sensibility: Decision-making and sources of information in mothers who decline HPV vaccination of their adolescent daughters
  2. To Vaccinate or Not?: Parents’ Stories
  3. HPV Vaccine for Sons: Do Parents Who Also Have Daughters Think Differently?
Examples of studies that are on the same topic, but don't directly address this question
  1. How parents make decisions about their children's vaccinations
     The title suggests that this study is relevant, but if you read the abstract, you'll see that the study doesn't investigate the reasons why parents don't vaccinate their kids.
  2. "Is cancer contagious?": Australian adolescent girls and their parents: making the most of limited information about HPV and HPV vaccination.
     This study explores parental knowledge (as well as the knowledge of adolescent girls) of the HPV vaccine. It doesn't explore why parents don't vaccinate their kids.
*Note that this is a broad question. A better question to ask on State of K would focus on a specific type of vaccination, such as measles or HPV.
Only add primary studies or literature reviews.
A primary study is a report of original research written by the people who conducted it. Secondary sources analyze or interpret primary research. Don't include secondary sources such as newspaper articles or "alerts" that bring attention to a study; just add the study itself. The one exception is literature reviews, which are valuable surveys of the existing literature. Definitely add any literature reviews you find.
Avoid studies that are forecasts, simulations, or predictions.
When a public policy is proposed, researchers often publish studies predicting the effects. If the policy hasn't been implemented yet, it's fine to build a list full of these forecast studies. But once a policy has been implemented and studies based on real data are available, avoid adding forecast studies to your list.


The best way to answer your question is to gather all of the studies that directly examine it. But how do you know that you have gathered all the relevant studies? The short answer is: you won't. But here are some helpful rules of thumb.

  1. For each search query you use, add all relevant studies until you see two pages of results without any relevant studies.
  2. Enter different search queries. For example, try "soda taxes consumption" first. As you read study titles/abstracts, you may discover other relevant terms you can use as queries such as "sugar-sweetened beverage tax". Keep brainstorming new terms.
  3. Once you think you have found all relevant studies, check the recommended studies, which are generated based on the studies in your list, and add any relevant recommendations.

Do you ever see a news headline or study and think that it might lead readers to a conclusion that you are skeptical of? Or perhaps you want to bring attention to a dimension of an issue that is different from the one the headline or study focuses on? This is what "related questions to consider" are for.
Say you find a study list for:
Do voter ID laws reduce voter fraud?
If you are skeptical of such laws, submit something like:
Do voter ID laws reduce voter turnout among citizens?
If you are worried about voter fraud, submit something like:
How prevalent is voter fraud?
If you are neutral, but curious, submit something like:
Do voter ID laws increase trust in democracy?

You may come across a study that you know has been critiqued or wholly debunked. Use State of K to submit that critique so that, whenever that study appears on our platform, the critique appears alongside it.
Example: Are presidential democracies more likely to become dictatorships than parliamentary democracies?
Presidentialism, Parliamentarism, and Democracy
AUTHOR: Jose Antonio Cheibub
PUBLISHED: 2007 by Cambridge University Press
Measuring the Presidential Risk Factor: A Comment on Cheibub’s Presidentialism, Parliamentarism, and Democracy
AUTHOR: William C. Terry
PUBLISHED: 2008 in Democracy and Its Development

Each study in a list for a question that can be answered "yes or no" can be assigned one of six labels. These labels are: "yes", "no", "mixed results", "insufficient evidence", "could not identify" and "no data".

The label assigned to a study represents the answer that the study gave to the question. For example, if one study found that soda taxes do reduce soda consumption, it would receive a "yes" label, while a study that found that soda taxes did not reduce soda consumption would receive a "no" label.

The label "mixed results" means that a study found some evidence to indicate that the answer to the question is "yes" and some evidence to indicate that the answer is "no". This label is often applied when a study uses two or more proxies to study the same phenomenon (e.g. firearm sales figures and self-reported firearm ownership rates as proxies for the prevalence of firearms) and the proxies yield different results when looking for correlations with another phenomenon (e.g. firearm-related deaths). Alternatively, the label may be applied if the phenomenon under study (e.g. whether breast milk improves cognitive function) is true for one group, but not another (e.g. true for girls, but not for boys).

The label "insufficient evidence" means that a study found there was insufficient evidence to reach a conclusion regarding the question.

The label "couldn't identify" means that State of K wasn't able to identify the study's response to the question based on the information that was available. This label is often applied when the person creating the list adds a study that seems like it directly examines the question, but it isn't clear from the title or abstract how the study answers the question and the full text of the study isn't accessible.

The label "no data" means that no one has entered a label yet.

Studies can also receive another set of labels: "literature review" and "highly regarded source". These labels are assigned regardless of whether the question that the study examines is a yes-or-no question.
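
If it helps to see the label scheme as a small data model, here is a purely illustrative sketch in Python; the class and member names are shorthand invented for this FAQ, not State of K's actual code:

    from enum import Enum

    class AnswerLabel(Enum):
        # Labels available for studies in lists that answer yes-or-no questions
        YES = "yes"
        NO = "no"
        MIXED_RESULTS = "mixed results"
        INSUFFICIENT_EVIDENCE = "insufficient evidence"
        COULD_NOT_IDENTIFY = "could not identify"
        NO_DATA = "no data"  # the default until someone enters a label

    class SourceLabel(Enum):
        # Labels assigned regardless of whether the question is yes-or-no
        LITERATURE_REVIEW = "literature review"
        HIGHLY_REGARDED_SOURCE = "highly regarded source"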

Answers only appear for questions that are capable of being answered yes or no.

On the home page and when you search for questions, you'll see results in the following format:

Do soda taxes reduce soda consumption?
20 Studies (2011 to 2019)
For yes/no questions, the colors correspond to the answer that each study gives to the question. The colors represent: insufficient evidence to reach a conclusion, mixed results, no, and yes.
The space that each color occupies represents the share of studies that gave that response. For example, if 20 studies examine a question and you see half red and half green, that means 10 studies answered "no" and 10 answered "yes".
For other questions, the colors correspond to information about each study. The colors represent: literature review, highly regarded source, and no info to report on the study's source.
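
To make the arithmetic behind the bar concrete, here is another purely illustrative Python sketch (the function name is our own invention, not State of K's code) that turns a list of study labels into the share of the bar each answer occupies:

    from collections import Counter

    def bar_shares(labels):
        # Return each label's share of the bar as a percentage of all studies
        counts = Counter(labels)
        total = len(labels)
        return {label: 100 * count / total for label, count in counts.items()}

    # 20 studies: 10 answered "yes" and 10 answered "no"
    example = ["yes"] * 10 + ["no"] * 10
    print(bar_shares(example))  # {'yes': 50.0, 'no': 50.0}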

Many of the studies gathered on State of K are published in privately owned journals. We can show you study titles, authors, publications and sometimes abstracts, but in many cases you would have to purchase the study or subscribe to a journal through an institution to read the study itself. If a study is freely available, we try to point you to it, but providing access to full text is not part of our service.

All labels of "highly regarded source" are assigned by State of K. As applied to journals, the label is assigned to the top 20 journals (as measured by the h-index) in various subcategories as classified and reported by Google Scholar. As applied to NGOs, the label is assigned to US NGOs ranked by the TTCSP Global Go To Think Tank Index Reports.

Note that the information contained in a source that is labelled "highly regarded" is not necessarily more accurate than information contained in a source without that label.

We love feedback. Click the link titled "Suggest a Feature" on your dashboard. You can get to your dashboard by clicking your username at the top right when you are logged in.

Should I add studies that are based purely on mathematical models or simulations?
Generally speaking, no. If there are studies based on real data, add those studies and ignore studies based purely on mathematical models. On the other hand, if, for example, a policy has only been proposed or was recently enacted and no data on its effects exists yet, then adding studies based on models is fine. Just try to phrase the question in a way that indicates you are asking about potential effects.