
(Update: I made slight stylistic/aesthetic changes to pdf - 6/6/2014)

(Update: Nixed Hampshire et al. Added MacCallum et al. - 6/10/2014)


**[ODP] - Parents’ Income is a Poor Predictor of SAT Score**

Author: nooffensebut

Abstract

Parents’ annual income lacks statistical significance as a predictor of state SAT scores when additional variables are well controlled. Spearman rank correlation coefficients reveal parents’ income to be a weaker predictor of average SAT scores for each income bracket within each state than parents’ education level as a predictor of average SAT scores for each education level within each state. Multiple linear regression of state SAT scores with covariates for sample size, state participation, year, and each possible combination of ordinal variables for parents’ income, parents’ education, and race shows income to lack statistical significance in 49% of the iterations with greater frequency of insignificance among iterations with higher explained variance. Cohen’s d comparisons of the yearly individual SAT advantage of having educated parents shows a fairly consistently increasing positive relationship over time, whereas similar analysis of the yearly individual SAT advantage of having high-income parents shows variability somewhat coinciding with the business cycle.

Key words: SAT; socioeconomic status; income; education; race

This is a very good paper and provides original insights. Thus, I recommend publication.

I come a little bit late, because when I saw this submission I was thinking about what to do with my article on multiple regression, since a lot of people don't understand what it is.


http://humanvarieties.org/2014/06/07/multiple-regression-multiple-fallacies/

As you can see, the title is provocative; that's why I needed a little more time. I think everyone should read it. It's unlikely (~0.00000000001) that I'm wrong.

I believe it might be useful, since you talk about regressions, which need to be interpreted carefully. For example, the following statement

β standardized coefficients for each covariate show that parents’ education predicts state SAT scores best, followed by state SAT participation rate.

is incorrect, at least in the sense in which you seem to interpret the "predictors," which are "predictors" in name only, not in reality.

What regression does is estimate the direct effect of your independent variable, not the total effect, which is composed of direct and indirect effects. In fact, multiple regression cannot disentangle the pattern of indirect effects. The only way to disentangle these indirect effects is to use longitudinal data with repeated measures, with tools such as path analysis or SEM.
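The direct-versus-total distinction can be shown numerically. The following is only a sketch with simulated data; the variable names (income, education, sat) and effect sizes are hypothetical assumptions, not the paper's model:

```python
# Sketch (simulated data, hypothetical effect sizes): the coefficient of
# "income" in a multiple regression is its direct effect only; its total
# effect also includes the indirect path through "education".
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
income = rng.normal(size=n)
education = 0.6 * income + rng.normal(size=n)               # income -> education (a = 0.6)
sat = 0.1 * income + 0.5 * education + rng.normal(size=n)   # direct b = 0.1, mediator path c = 0.5

def ols(cols, y):
    """Least-squares fit with an intercept; returns the slope coefficients."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

total = ols([income], sat)[0]              # simple regression: total effect ~ 0.1 + 0.6*0.5 = 0.4
direct = ols([income, education], sat)[0]  # multiple regression: direct effect ~ 0.1
indirect = total - direct                  # recovered indirect effect ~ 0.3
print(round(total, 2), round(direct, 2), round(indirect, 2))
```

Fitting both regressions recovers the decomposition the reviewer describes: the multiple-regression coefficient is only the direct path, and the indirect path is the difference from the bivariate slope.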

This is a fact that most scientists ignore. Indeed, you can see that my blog post reads somewhat like a declaration of war. That's why I hesitated to post it.

Now that that is said, your article is of really good quality. I haven't finished it yet; I'll come back later.

(Also, if you could write a blog post somewhere showing how to produce graphs like those in Figures 2a, 2b, 3, and so on, that would be nice. I've always wanted to do that but don't know what tools to use or what the procedure is.)

I recommend publication as well.

Question.


In your article, you refer to Everson & Millsap...

used pathway analysis to show socioeconomic status only had a minor direct association with 1995 SAT scores but acted indirectly through high school achievement and “extracurricular activities” like “computer experiences” and “literature experiences.”

Is it really this article?

http://research.collegeboard.org/sites/default/files/publications/2012/7/researchreport-2004-3-exploring-school-effects-sat-scores.pdf

The kind of approach they use must be what's in Figures 2/3, no? That's what I was afraid of. They commit the fallacy of ignoring "model equivalency." They shouldn't do that. They don't even cite MacCallum, so I'll do it:

MacCallum, R. C., Wegener, D. T., Uchino, B. N., & Fabrigar, L. R. (1993). The problem of equivalent models in applications of covariance structure models. Psychological Bulletin, 114, 185–199.

If you want, you can add this reference to your article.

Concerning Sackett (2012), it's another disappointment. Here's what they say:

In the 2006 national population of test takers, the correlation between SES and composite SAT score was .46. Therefore, 21.2% of variance in SAT scores is shared with SES, as measured here as a composite of mother’s education, father’s education, and parental income. Thus, SAT scores are by no means isomorphic with SES, although the source of the SES-SAT relationship is likely due to some combination of educational opportunity, school quality, peer effects, and other social factors.

And here's what the same Sackett (2008) said:

High-Stakes Testing in Higher Education and Employment: Appraising the Evidence for Validity and Fairness (Sackett 2008)

Prototypically, admissions tests correlate about .35 with first-year grade point average (GPA), and employment tests correlate about .35 with job training performance and about .25 with performance on the job. One reaction to these findings is to square these correlations to obtain the variance accounted for by the test (.25 accounts for 6.25%; .35 accounts for 12.25%) and to question the appropriateness of giving tests substantial weight in selection or admissions decisions given these small values (e.g., Sternberg, Wagner, Williams, & Horvath, 1995; Vasquez & Jones, 2006).

One response to this reaction is to note that even if the values above were accurate (and we make the case below that they are, in fact, substantial underestimates), correlations of such magnitude are of more value than critics recognize. As long ago as 1928, Hull criticized the small percentage of variance accounted for by commonly used tests. In response, a number of scholars developed alternate metrics designed to be more readily interpretable than “percentage of variance accounted for” (Lawshe, Bolda, & Auclair, 1958; Taylor & Russell, 1939). Lawshe et al. (1958) tabled the percentage of test takers in each test score quintile (e.g., top 20%, next 20%, etc.) who met a set standard of success (e.g., being an above-average performer on the job or in school). A test correlating .30 with performance can be expected to result in 67% of those in the top test quintile being above-average performers (i.e., 2 to 1 odds of success) and 33% of those in the bottom quintile being above-average performers (i.e., 1 to 2 odds of success). Converting correlations to differences in odds of success results both in a readily interpretable metric and in a positive picture of the value of a test that “only” accounts for 9% of the variance in performance. Subsequent researchers have developed more elaborate models of test utility (e.g., Boudreau & Rynes, 1985; Brogden, 1946, 1949; Cronbach & Gleser, 1965; Murphy, 1986) that make similar points about the substantial value of tests with validities of the magnitude commonly observed. In short, there is a long history of expressing the value of a test in a metric more readily interpretable than percentage of variance accounted for.
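The Lawshe-style figures in this quotation (67% and 33% for a test correlating .30) can be checked with a quick simulation under a bivariate-normal assumption; this is an illustrative sketch, not part of either study:

```python
# Check of the quoted Lawshe-style odds: with a test-performance correlation
# of .30, about 67% of the top test quintile and about 33% of the bottom
# quintile should be above-average performers (bivariate-normal assumption).
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
test = rng.normal(size=n)
perf = 0.30 * test + np.sqrt(1 - 0.30**2) * rng.normal(size=n)  # corr(test, perf) = .30

top = test > np.quantile(test, 0.80)
bottom = test < np.quantile(test, 0.20)
p_top = (perf[top] > 0).mean()        # share of top quintile above the performance mean
p_bottom = (perf[bottom] > 0).mean()  # share of bottom quintile above the mean
print(round(p_top, 2), round(p_bottom, 2))  # ~0.67 and ~0.33
```

The simulation reproduces the quoted 2-to-1 and 1-to-2 odds, so the point about "small" variance-accounted-for values stands as quoted.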

Has he changed his mind? I don't believe that; it's more likely something is wrong here. r² is a statistic that should never be used. He knew the problems with r² but still uses it. Well...

Concerning the Hampshire (2012) article, it has problems, and personally I would not cite it (although that's just my opinion, so it's worth little).

http://www.openpsych.net/forum/showthread.php?tid=38

I am in Leipzig, Germany right now, so I am pressed for time and don't have access to the academic literature.


Good submission. Very thorough paper.

Sackett et al estimated a correlation between SAT scores from 1995 to 1997 and socioeconomic status of 0.42, explaining 18% of variance. They defined socioeconomic status as a composite that equally weighted father’s education, mother’s education, and family annual income. Their meta-analysis of 55 standardized-test studies, less than half of which were for the SAT, gave the significant correlations of 0.255, 0.284, 0.223, and 0.186 for composite socioeconomic status, father’s education, mother’s education, and family income, respectively, without utilizing multiple linear regression. A similar study by Sackett et al (2012) determined that composite socioeconomic status explained 21.2% of 2006 composite-SAT variance.

This kind of compositing has been annoying me for some time; it is very arbitrary. As I see it, one should do one or both of the following:

1) use the variables for socioeconomic status, do a factor analysis (of whatever type: PCA, PAF, maximum likelihood, ...), and use the general factor. A lack of a general factor would mean there is no such thing as general socioeconomic status. However, one is universally found, so this isn't an issue. :)

2) use multiple regression on the desired dependent variable to find the best way to weight the variables for prediction.

The two methods may give the same results, which makes for easy interpretation. :)
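The two suggested approaches can be sketched on simulated data; the indicator names and loadings below are assumptions for illustration, not estimates from any real dataset:

```python
# Sketch of the two suggested approaches on simulated SES indicators:
# (1) extract a general factor from the indicators' correlation matrix;
# (2) let multiple regression choose the weights that best predict the outcome.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
ses = rng.normal(size=n)  # latent general SES factor (simulation assumption)
father_edu = 0.8 * ses + 0.6 * rng.normal(size=n)
mother_edu = 0.8 * ses + 0.6 * rng.normal(size=n)
income     = 0.5 * ses + 0.9 * rng.normal(size=n)
sat = 0.3 * father_edu + 0.3 * mother_edu + 0.05 * income + rng.normal(size=n)

X = np.column_stack([father_edu, mother_edu, income])
X = (X - X.mean(0)) / X.std(0)

# (1) first principal component of the correlation matrix as the general factor
vals, vecs = np.linalg.eigh(np.corrcoef(X, rowvar=False))
loadings = vecs[:, -1] * np.sign(vecs[:, -1].sum())  # top eigenvector, oriented positive
print("factor loadings:", loadings.round(2))         # income loads lower than the education items

# (2) regression weights for predicting SAT
beta = np.linalg.lstsq(np.column_stack([np.ones(n), X]), sat, rcond=None)[0][1:]
print("regression weights:", beta.round(2))          # income gets little weight here
```

In this simulated setup both methods agree in ranking the indicators, which is the "easy interpretation" case mentioned above; with real data they need not agree.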

@menghu1001

“4. Variable x1, controlling for x2 and x3, has a large/weak/null effect on variable Y.

By variable, what is meant conventionally is its full effect, not its partial effect. That’s why the statement is a complete nonsense.”

“…regression with longitudinal data is the only way to prove the causality and by the same token to disentangle the pattern of direct and indirect effects. This is because causality deals with changes, that is, the impact of the changes in variables x1 and x2 on the changes in variable Y. But changes is only possible with repeated measures and only if the variables are allowed to change meaningfully over time…”

“…multiple regression should be used to test whether any variable can be weakened in its direct effect by other independent variables but not to test between them as if they were in competition with each other.”

Thanks for reviewing. I incorporated some of your corrections. I didn’t take out the discussion of the suppression effect of race and parents’ education on parents’ income, because that would obviously gut the paper, and you haven’t persuaded me that multiple regression is incapable of studying suppression effects. The longstanding statistical concept of a suppression effect in regression can be found in the <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2819361/">scientific literature</a> and <a href="http://books.google.com/books?id=xUy-NNnSQIIC&pg=PA1102&lpg=PA1102&dq=%22suppression+effect%22+regression&source=bl&ots=SKCM4auZIS&sig=zPjhDZANYONtIyNbFyYopMgt-e4&hl=en&sa=X&ei=UPKTU9KBJMiVyATb3IK4BQ&ved=0CE0Q6AEwBTgK#v=onepage&q=%22suppression%20effect%22%20regression&f=false">textbooks</a>. I see that you have a strong preference for SEM and pathway analysis, but I’m not seeing a methodical disproof of suppression effects in multiple linear regression. I did attempt to test variable interaction effects. Apparently, I didn’t do it right, but if my preliminary results were any indication, the interaction between race and parents’ education was much stronger than the other interaction variables. That would have been just one more piece of evidence that parents’ income hardly matters. Perhaps your thinking is also influenced by the issue of multicollinearity, and I agree that is an important issue for underpowered studies. If income somehow has a large indirect effect despite its tiny direct effect, assuming your logic is sound, of which I am unconvinced, then the fact that I have made clear that my methodology was based on multiple regression will not discourage future research using SEM. I am innocent of false advertising. I think I mostly avoided causal language: I edited my study to use words like “associated” and “predicted” rather than “caused” or “made.”

All science is vulnerable to postmodernist critique. Even longitudinal observations are vulnerable to the fallacy of post hoc, ergo propter hoc. As you and I noted, SEM is vulnerable to confirmation bias and equivalent models. My use of multiple linear regression is too simplistic only until you compare it to everything else done so far: the mere correlations cited by university presidents and media personalities, and the published studies that assume SES is two parts education, one part income. My imperfect approach resulted in very high R^2 values.
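The kind of suppression at issue in this exchange can be illustrated with simulated data, where the direct and total effects of income even take opposite signs; the variable names and numbers below are hypothetical, not the paper's estimates:

```python
# Sketch of a suppression/sign-flip pattern (simulated, hypothetical numbers):
# income looks beneficial on its own, but its direct effect, controlling for
# parents' education, has the opposite sign.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
income = rng.normal(size=n)
education = 0.7 * income + rng.normal(size=n)
sat = 0.6 * education - 0.2 * income + rng.normal(size=n)  # direct income effect is negative

ones = np.ones(n)
tau = np.linalg.lstsq(np.column_stack([ones, income]), sat,
                      rcond=None)[0][1]                    # total effect ~ +0.22
tau_prime = np.linalg.lstsq(np.column_stack([ones, income, education]), sat,
                            rcond=None)[0][1]              # direct effect ~ -0.20
print(round(tau, 2), round(tau_prime, 2))
```

Comparing the bivariate slope (total effect) with the multiple-regression slope (direct effect) is exactly how a suppression pattern shows up in ordinary regression output.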

I have noticed that certain methodologies (SEM, GWAS) gain cult-like followings and become gods that demand that we put no other gods before them. You want to make the perfect evidence the enemy of good evidence, but perfect evidence either doesn’t exist, or those who hold it have vested interests in hiding the truth. Everson and Millsap, of the College Board, also feel that multiple regression is outdated, and their use of SEM conveniently divided part of the effects of race into between-school differences, so as to “provide little support and comfort to those critics” who have “attacked high-stakes tests such as the SAT as biased, unfair, or discriminatory.” Thus, SEM let them shoehorn the data into a facile narrative. Racial injustice explains the gap, but it’s not their fault, of course. My study has value precisely because it is DIY with publicly available data, which gives me the freedom to test the assumptions of both the race-based criticisms of the SAT and the attempts to deflect racial controversy by the College Board.

Regarding R^2, I think your criticisms apply more to the studies by Schmidt et al than to mine. I am comparing the R^2 values of different regressions to each other, not looking at one R^2 in isolation in order to make it seem small. Since my R^2 values are already so close to one, replacing them with R values would make little difference; however, it would make my presentation of the data inconsistent with the studies I cited. Also, adjusted R^2 is considered more appropriate for multiple linear regression than R^2. Is there such a thing as “adjusted R”?
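For reference, the adjusted-R^2 correction at issue is a one-line formula, shown here on made-up numbers; to my knowledge there is no standard "adjusted R" beyond taking the square root of adjusted R^2:

```python
# The standard adjusted-R^2 formula, applied to hypothetical numbers
# (n observations, p predictors).
def adjusted_r2(r2, n, p):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

print(round(adjusted_r2(0.95, 100, 6), 4))  # a high R^2 barely moves: 0.9468
print(round(adjusted_r2(0.50, 10, 3), 4))   # a modest R^2 in a small sample drops: 0.25
```

This also illustrates the point in the text: when R^2 is already close to one and the sample is large, the adjustment (and hence the choice between R^2 and adjusted R^2) makes little difference.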

“write a blog post somewhere to show us how to produce the graphs such as in figure 2a, 2b, figure 3...,”

All graphs were made with Matlab using add-on commands.

x=[1 1 1 1 1 2 2 2 2 2 3 3 3 3 3 4 4 4 4 4];

y=[20 40 60 80 100 20 40 60 80 100 20 40 60 80 100 20 40 60 80 100];

z=[1.90E-30 1.80E-55 2.20E-58 2.30E-62 4.80E-62 3.20E-09 6.60E-06 0.053 0.34 0.41 7.80E-23 5.30E-12 0.052 0.083 9.40E-05 2.10E-60 4.30E-40 4.70E-17 2.00E-06 0.083];

tri=delaunay(x, y);

trimesh(tri, x, y, z);

set(gca, 'zscale', 'log')

campos([-6.2462,683.8065,-1.8871])

Figure 3 uses plotyy(x, y1, x, y2) to plot against the right vertical axis. I made the graphs discontinuous, as follows:

% education beta

x=[1 2 3 4 5];

y=[-1.17E-02 1.69E-02 7.59E-02 1.10E-01 1.38E-01];

x=[x,5];

y=[y,NaN];

x=[x,6 7 8 9 10];

y=[y,-1.17E-02 -1.37E-02 1.49E-02 2.81E-02 4.33E-02];

x=[x,10];

y=[y,NaN];

(etc.)

l1 = plot(x,y);

hold on;


I cannot access your link to Google Books. However, MacKinnon et al. (2000) said things like this:

The signs and magnitudes of the τ (note: total effect in the model with 2 variables) and τ’ (note: direct effect in the model with 3 variables) parameters indicate whether or not the third variable operates as a suppressor. If the two parameters share the same sign, a τ’ estimate closer to zero than the τ estimate (a direct effect smaller than the total effect) indicates mediation or positive confounding, while a situation in which τ is closer to zero than τ’ (a direct effect larger than the total effect) indicates suppression (or inconsistent mediation or negative confounding, depending on the conceptual context of the analysis). In some cases of suppression, the τ and τ’ parameters may have opposite signs.

Hmm... well, I think I'm aware of that, but I'm not completely sure what your point is. See below.

MR functions the same way as SEM: the regression coefficient is nothing more than the portion of the total effect that, in SEM, is labeled the "direct path." Therefore, there is no way to say which variable has the strongest effect. As I said in my article, the most relevant part is the sentence saying that the regression coefficient depends solely on the strength of the direct effect. I'm not saying MR is useless: if the addition of a predictor does not weaken a variable, say, income, then you can be confident that most of its effect is direct, and you don't need to posit assumptions about the causal pathways among the predictors, i.e., the kind of thing you do when using SEM.

Not exactly. I'm not a proponent of SEM with cross-sectional data, but of SEM with longitudinal data. Unfortunately, the latter is the exception, not the rule. Most people who use SEM don't know what they are doing. Everson & Millsap is another illustration of what I'm saying.

If you're not convinced, you must explain to me why and where I am wrong. Furthermore, SEM and MR are more similar than you seem to believe. SEM disentangles direct and indirect effects, unlike MR. Because of that, in SEM you must make assumptions about which causal model is more likely than the others, based either on theory or on data (if you use longitudinal data). My point is that MR is just like SEM with the pattern of indirect effects removed. And my question is: why would you think the regression coefficients are necessarily comparable? MR can indeed give you an indication of a suppression effect, based on what MacKinnon et al. said above. I already knew that, but the most relevant question is: what is the total effect of each of your independent variables? Can you calculate them? My point is that you can compare the predictors only if you can calculate their total effects.

I have made modifications to my article; I'm not sure if you have read it yet, but I said: "What is even more insidious is that the direct and indirect effects can be of opposite signs (Xinshu, 2010). Thus, in multiple regression, not only is it impossible to judge the total effect of any variable, but it is also impossible to judge the direction of the correlation." The authors made it clear that direct and indirect effects are not necessarily complementary; there can be competitive mediation, which happens when the direct and indirect paths have opposite signs.

I know it's a difficult question, since researchers will not recognize it. I have emailed a lot of authors (~20) but only 3 responded. Two of them thanked me; the other did not understand what I was saying: "there is no controversy," "I have not compared the predictors among them," stuff like that. It's been 3-4 days now. This is not encouraging, especially because the traffic stats on my blog are extremely low, which means they have not even bothered to click the link to my article.

There is a last comment I received from Paul Sackett, although he did not read the articles I suggested to him. He only says he agrees with me that we must not use r² as a measure of effect size, but that we can report it as representing the "amount of variance," and nothing more.

I saw that already, but you haven't said explicitly that you did not consider them measures of effect size, and most people do see them as effect sizes. Each time I see someone reporting r², I expect him to interpret it the wrong way unless he makes it very explicit that r² is not an effect size and should not replace r.

Agreed. But I think it's even worse than that. If you use SEM, it takes only a few minutes to spot that fallacy: simply reversing the arrows, or drawing a "covariance path" instead of a "causation path" between the independent variables, will immediately show you the full extent of the fallacy. Researchers must know that; I can't believe they have never noticed it before. Thus, my assumption is that they are doing something they know is not robust to equivalent models. That makes me really upset. A lot of studies using MR and SEM are garbage.

Thanks, I'll try that. I hope Matlab is free, because if not...

[hr]

I would like to request something important. I don't have much free time, and probably you don't either. Each time I review an article and the author makes changes, he never specifies WHICH part of the article was modified or what was added. I must constantly search for the additions and modifications, which forces me to re-read the article entirely. I prefer not to waste my time searching for this and that.

So, I suggest two options:

1. the author copy-pastes into the forum the portions that were added or modified.

2. the author marks in color (blue or red) the portions that were added or modified.

What do you think ?

The signs and magnitudes of the τ (note : total effect in model with 2 var) and τ’ (note : direct effect in model with 3 var) parameters indicate whether or not the third variable operates as a suppressor. If the two parameters share the same sign, a τ’ estimate closer to zero than the τ estimate (a direct effect smaller than the total effect) indicates mediation or positive confounding, while a situation in which τ is closer to zero than τ’ (a direct effect larger than the total effect) indicates suppression (or inconsistent mediation or negative confounding, depending on the conceptual context of the analysis). In some cases of suppression, the τ and τ’ parameters may have opposite signs.

Hmm... well, I think I'm aware of that, but I'm not completely sure what's your point. See below.

I see that you have a strong preference for SEM and pathway analysis, but I’m not seeing a methodical disproof of suppression effect in multiple linear regression

MR functions the same way as SEM. the regress coeff is nothing more than the portion of total effect in SEM that is labeled "direct path". Therefore, there is no possibility to say which one has the strongest effect. As I said in my article, the most relevant part is the sentence that says that the regress coeff depends solely on the strength of the direct effect. I'm not saying MR is useless, because if the addition of predictor does not weaken the variable, say, income, then you are confident that most of its effect is direct. Thus, you don't need to posit assumptions regarding the causal pathways among the predictors, i.e., the kind of thing you're doing when using SEM.

I see that you have a strong preference for SEM and pathway analysis

Not exactly. I'm not a proponent of SEM with cross-sectional data, but a proponent of SEM with longitudinal data. Unfortunately, the latter case is the exception, not the rule. Most people who use SEM don't know what they are doing. Everson & Millsap is another illustration of what I mean.

If income somehow has a large indirect effect despite its tiny direct effect, assuming your logic is sound, of which I am unconvinced, then the fact that I have made clear that my methodology was based on multiple regression will not discourage future research using SEM.

If you're not convinced, you must explain to me why and where I am wrong. Furthermore, SEM and MR have more in common than you seem to believe. SEM disentangles direct and indirect effects, unlike MR. Because of that, in SEM you must make assumptions about which causal model is more likely than the others, based either on theory or on data (if you use longitudinal data). My point is that MR is just like SEM with the pattern of indirect effects removed. And my question is: why would you think the regression coefficients are necessarily comparable? MR can indeed give you an indication of a suppression effect, based on what MacKinnon et al. said above. I already knew that, but the more relevant question should be: what is the total effect of each of your independent variables? Can you calculate them? My point is that you can compare the predictors only if you can calculate their total effects.
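MacKinnon et al.'s τ versus τ′ criterion, quoted above, is easy to see in a simulation. In this invented setup (not anything from the paper under review), one predictor carries signal plus noise and a second variable measures only the noise, which produces classic suppression:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented illustration of suppression: X2 measures only the noise in X1
# and is itself uncorrelated with Y.
T = rng.normal(size=n)            # the part of X1 that actually drives Y
E = rng.normal(size=n)            # measurement noise in X1
X1 = T + E
X2 = E                            # the suppressor
Y = T + 0.1 * rng.normal(size=n)

def slopes(y, *xs):
    """Least-squares slopes (intercept fitted, then dropped)."""
    D = np.column_stack([np.ones_like(y), *xs])
    return np.linalg.lstsq(D, y, rcond=None)[0][1:]

tau = slopes(Y, X1)[0]            # total effect of X1 (two-variable model)
tau_prime = slopes(Y, X1, X2)[0]  # direct effect of X1 (three-variable model)

print(f"tau  (total)  ~ {tau:.2f}")        # ~ 0.50
print(f"tau' (direct) ~ {tau_prime:.2f}")  # ~ 1.00, farther from zero: suppression
```

Here τ and τ′ share a sign and τ is closer to zero than τ′, which is exactly MacKinnon's signature of suppression.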

I have made modifications to my article; I'm not sure if you have read them yet, but I said: "What is even more insidious is that the direct and indirect effects can be of opposite signs (Xinshu, 2010). Thus, in multiple regression, not only is it impossible to judge the total effect of any variable, but it is also impossible to judge the direction of the correlation." The authors made it clear that direct and indirect effects are not necessarily complementary; there can be competitive mediation, which happens when the direct and indirect paths have opposite signs.
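The opposite-signs case (competitive mediation) can be checked with a short simulation, using invented coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Invented coefficients: X helps Y through M (+1.0 * +0.5) but hurts it
# directly (-0.5), so the two pathways cancel.
X = rng.normal(size=n)
M = 1.0 * X + rng.normal(size=n)             # a-path: X -> M
Y = -0.5 * X + 0.5 * M + rng.normal(size=n)  # c'-path (direct) and b-path

def slope(y, *xs):
    D = np.column_stack([np.ones_like(y), *xs])
    return np.linalg.lstsq(D, y, rcond=None)[0][1]  # coefficient of the first x

total = slope(Y, X)      # c = c' + a*b = -0.5 + 1.0*0.5, near zero
direct = slope(Y, X, M)  # c' alone, near -0.50

print(f"total effect of X  ~ {total:+.2f}")
print(f"direct effect of X ~ {direct:+.2f}")
```

The simple regression sees essentially no effect of X even though both a direct and an indirect pathway exist; they merely have opposite signs.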

I know it's a difficult question, since researchers will not acknowledge it. I have emailed a lot of authors (~20) but only 3 responded. Two of them thanked me; the other did not understand what I said: "there is no controversy", "I have not compared the predictors among them", things like that. It's been 3-4 days now. This is not encouraging, especially because the traffic stats on my blog are extremely low, which means they have not even bothered to click on the link to my article.

There is one last comment I received, from Paul Sackett, although he did not read the articles I suggested to him. He only says he agrees with me that we must not use r² as a measure of effect size, but that we can report it as representing the "amount of variance" and so on, and nothing more.

Since my R^2 values are already so close to one, replacing them with R values would make little difference.

I saw that already, but you haven't said explicitly that you do not consider them a measure of effect size, and most people do see them as one. Each time I see someone reporting r², I expect him to interpret it the wrong way unless he has made it very explicit that r² is not an effect size and should not replace r.

As you and I noted, SEM is vulnerable to confirmation bias or equivalent models.

Agreed. But I think it's even worse than that. If you use SEM, it takes you only a few minutes to spot the fallacy: simply reversing the arrows, or drawing a "covariance path" instead of a "causation path" between the independent variables, will immediately show you its full extent. Researchers must know that; I can't believe they have never noticed it before. Thus, my assumption is that they are doing something they know is not robust to equivalent models. That makes me really upset. A lot of studies using MR and SEM are garbage.

All graphs were made with Matlab using add-on commands.

Thanks, I'll try that. I hope Matlab is free, because if not...

[hr]

**Emil and duxide, if you can read this message...** I would like to request something important. I don't have much free time, and probably you don't either. Each time I review an article, when the author makes changes, he never specifies WHICH part of the article was modified or what was added. I must constantly search for the additions and modifications, which forces me to re-read the entire article once again. I prefer not to waste my time searching for this and that...

So, I suggest two options:

1. The author copy-pastes into the forum the portion that was added or modified.

2. The author marks in color (blue or red) the portion that was added or modified.

What do you think?

I am not reviewing the manuscript, but is it OK to make a simple suggestion? Please spell out the full names before using "the SAT" and "the ACT". This is of minor importance, but I have run into confusion with the SAT because the same acronym is used for the Stanford Achievement Test.

@menghu1001

“Therefore, there is no possibility to say which one has the strongest effect.”

“If you're not convinced, you must explain to me why and where I am wrong.”

You think (correct me if I’m wrong) that R^2 underestimates the true explanatory power of a model, and that MR underestimates the true explanatory power of a variable because it only includes direct effects. So, if my model can achieve an R^2 of 0.942, then adding in those indirect effects could add up to 0.058 to the coefficient of determination, and you want me to think that such a minuscule change would so profoundly transform my model that, in its current form, “there is no possibility to say which [variable] has the strongest effect.” I would say that my study is evidence that you misunderstand MR.

“why would you think the regression coefficients are necessarily comparable?”

Since they are not comparable, any semblance to the real world is just a coincidence. Here is a list of coincidences found in my data:

1. Asian race improves math score.

2. Asian race somewhat hurts verbal score.

3. The racial variable always matches the national reports of racial gaps (i.e. always positive except for Asian race with verbal scores).

4. A study that includes all these races, but singles out Black people as the lowest group and Asians as the highest group would be expected to show small effects for Asian race because they score closer to the majority race.

5. Affirmative action exists, and it allows certain minorities to achieve levels of income and especially levels of education beyond what their SAT scores would predict. Therefore, a racial variable should matter more at higher incomes and especially at higher education levels.

6. The higher influence of education compared to income matches Cohen’s d advantage magnitudes.

7. The higher influence of education matches expectations based on high estimated hereditary influence on the SAT and high bivariate overlap with GCTA.

8. Having at least a high school education is such a powerful norm in the US that it is extremely common, so much so that the designation of high school graduate should be pedestrian and uninformative.

9. Students who take the SAT in ACT states probably tend to be smarter than those in ACT states who don’t.

10. SAT participation rates are declining over time.

11. More populated states tend to be SAT states.

12. The Flynn effect is thought to have ceased in the US.

These are all just coincidences, and I’m sure that you could find just as many notions suggested by my data that are contradicted by real-world observations. If not, my study might prove magical forces are at play.

“I hope Matlab is free”

I don’t think so. Torrents are free but illegal and, therefore, strictly forbidden!

@csdunkel

“Please spell out the full name before using the SAT and ACT”

I don’t think they officially stand for anything anymore. The SAT stood for Scholastic Aptitude Test until 1990, then Scholastic Assessment Test until 1993, when it became a pseudo-acronym. ACT stood for American College Testing, but I think it is a pseudo-acronym, too.

There are various ways to do this. One can certainly color things. If one writes in a standard word processor, then one can have it record changes automatically. This is the standard practice, I think.

Since I usually write in LaTeX, this option isn't easily available.

Get Matlab here:

https://torrentz.eu/search?q=matlab

As also noted above, they are not real acronyms anymore. One can refer to them as widely used achievement tests in the US or some such if one wants to, or by their previous name, or by the publisher. I don't care that much. :)

So, if my model can achieve an R^2 of 0.942, then adding in those indirect effects could add up to 0.058 to the coefficient of determination, and you want me to think that such a minuscule change would so profoundly transform my model that, in its current form, “there is no possibility to say which [variable] has the strongest effect.”

This is because you believe R^2 tells you the predictive power of the direct effects of the variables in the model while in reality it tells you the predictive power of the variables included in the model. In other words, it still regards the total effect.

It's interesting that each time I try to constrain the indirect paths in SEM model to zero, the r² systematically decreases. And I'm thinking that if r² regards only the direct paths, as you imply, this can never happen.

I believe I have a good paper on this subject, about the irrelevance of r² in multiple regression, but I'm busy and I cannot find it among the messy millions of documents on my computer. (I'll edit this post later if I find it.)

that MR underestimates the true explanatory power of a variable because it only includes direct effects

My earlier comment was clear: the total effect can be lower than, higher than, or similar to the direct path.

I will probably come here later, because right now, I find it impossible to understand the 2nd part of your comment.

<blockquote>This is because you believe R^2 tells you the predictive power of the direct effects of the variables in the model while in reality it tells you the predictive power of the variables included in the model. In other words, it still regards the total effect.</blockquote>

Do you have a source for any of this logic, or are you coming up with it, yourself? When you say total effect, you mean the true total effect that accounts for longitudinal SEM variable effects??? No, the coefficient of determination is specific to the model under consideration. It is the explained sum of squares divided by the total sum of squares, using the variables specified without additional interaction variables, unless the model includes them.

The whole point of standardized regression coefficients is to be able to compare them. Otherwise, why standardize? Research that uses multiple regression and compares beta coefficients is common, but, as <a href="http://pareonline.net/pdf/v17n9.pdf">Nathans et al</a> point out, “it is often not best to rely only on beta weights when interpreting MR results. In MR applications, independent variables are often intercorrelated, resulting in a statistical phenomenon that is referred to as <b>multicollinearity</b>…” (emphasis added). However, they still recommend calculating beta weights: “It is recommended that all researchers begin MR analyses with beta weights, as they are easily computed with most statistical software packages and can provide an initial rank ordering of variable contributions in one computation. If there are no associations between independent variables <b>or the model is perfectly specified, no other techniques need to be employed”</b> (emphasis added). Berry and Feldman’s Multiple Regression in Practice points out that “because multicollinearity increases the standard errors of coefficient estimators, the major effect of multicollinearity is on significance tests and confidence intervals for regression coefficients. When high multicollinearity is present, confidence intervals for coefficients tend to be very wide, and t-statistics for significance tests tend to be very small” (p. 41). My model achieved high coefficients of determination, which reflects positively upon its specifications. I examined my results to see if the standard errors were high or the confidence intervals were wide. They usually weren’t because the sample sizes were large, (defining sample size by the number of state averages, rather than the number of students). They were worst for the education variable when it meant having at least a high school education. 
I think it maxed out at 97 with a confidence interval of -200 to 180 (not standardized coefficients), when race was Asian, and income was greater than $20K. I could use this information to support my conclusion that the education variable is not useful when it is high school diploma, but it is difficult to briefly quantify hundreds of regression results. So, the issue of multicollinearity actually strengthens my conclusions.
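Berry and Feldman's point that multicollinearity inflates coefficient standard errors can be illustrated with a toy simulation (all numbers are invented; this is not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

def coef_se(r):
    """Standard error of b1 in Y ~ X1 + X2 when corr(X1, X2) is about r."""
    X1 = rng.normal(size=n)
    X2 = r * X1 + np.sqrt(1 - r**2) * rng.normal(size=n)
    Y = X1 + X2 + rng.normal(size=n)
    D = np.column_stack([np.ones(n), X1, X2])
    b = np.linalg.lstsq(D, Y, rcond=None)[0]
    resid = Y - D @ b
    s2 = resid @ resid / (n - 3)          # residual variance
    cov = s2 * np.linalg.inv(D.T @ D)     # covariance matrix of the estimates
    return float(np.sqrt(cov[1, 1]))

se_low = coef_se(0.0)     # orthogonal predictors
se_high = coef_se(0.95)   # highly collinear predictors
print(f"SE(b1), r=0.00: {se_low:.3f}")
print(f"SE(b1), r=0.95: {se_high:.3f}")  # several times larger
```

The standard error grows by roughly the square root of the variance inflation factor, 1/(1 − r²), which is why narrow observed confidence intervals are evidence against severe multicollinearity.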

<blockquote>I find it impossible to understand the 2nd part of your comment.</blockquote>

Reductio ad absurdum takes one to one’s logical conclusion. The whole point is to show that your logical conclusion should not make sense.

nooffensebut,

Did you update the PDF file in the original post? That's not usually the way we do it here. 1) It removes the previous edits so that one cannot follow the edit changes backwards e.g. to see the original submission. 2) When one updates the file, the thread is not marked as "new post", which means that reviewers don't know that anything has happened.

For these reasons, it is best to post a new reply every time one has a new edit, except in very trivial cases (like changing a typo or something).

Sorry if this practice was not clear. I will add a note about it.

@Emil

Sorry. I made only slight changes on two occasions, which I briefly described in the original post message.

So far there are two reviewers who agree to publication (here and here).

I don't have any objections. Meng Hu seems to want to discuss some stuff about how to interpret MR. Are they necessary to resolve (if resolvable) before publication?

I did not see any obvious mistakes in the paper, so I concur with publication. Can Meng Hu say whether he agrees, and if not, what needs to be changed specifically?

Do you have a source for any of this logic, or are you coming up with it, yourself? When you say total effect, you mean the true total effect that accounts for longitudinal SEM variable effects??? No, the coefficient of determination is specific to the model under consideration.

I have, if I remember, already given one proof of it, in the 2nd paragraph in my comment. Also, I thought everyone who does MR should know that r² measures the predictive power of a regression model, and your comment shows you already know that. The only instance where r² equals the (squared standardized) regression coefficient is when you have only one independent variable, and in that case your MR is nothing more than a bivariate correlation.
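For the record, with a single predictor the standardized slope equals Pearson's r, so r² equals the squared coefficient. A minimal numpy check on toy data (the slope 0.6 is invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(size=n)   # invented slope

zx = (x - x.mean()) / x.std()      # z-scores
zy = (y - y.mean()) / y.std()

beta = (zx @ zy) / (zx @ zx)       # standardized regression slope
r = np.corrcoef(x, y)[0, 1]        # Pearson correlation
r2 = 1 - ((zy - beta * zx) ** 2).sum() / (zy ** 2).sum()

print(f"beta = {beta:.4f}, r = {r:.4f}, R^2 = {r2:.4f}, r^2 = {r**2:.4f}")
# beta coincides with r, and R^2 with r^2, only in this one-predictor case
```

With two or more predictors this identity breaks down, which is the bivariate special case being described.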

In any case, if my entire point on MR is wrong, that necessarily means the SEM machinery makes no sense at all, i.e., that there is no such thing as an indirect effect or a total effect. Before going that far, remember that SEM is nothing more than MR with more possibilities: multiple dependent variables, longitudinal and repeated measures, model fit, etc. Given that my blog post showed that the direct path in SEM is indeed the regression coefficient from MR, I don't have any doubt about my main point. If you disagree again, you have to explain why the direct path in SEM is the exact equivalent of the regression coefficient in MR.

The whole point of standardized regression coefficients is to be able to compare them. Otherwise, why standardize?

I don't understand. I have already explained this point in my post. It does not matter, B or Beta. You must know the total effect, or at least make some assumptions. For instance, suppose you have an independent variable that is unlikely to be caused by the others. If you include parents' income and parents' education, I'm sure you can assume, and write, that parents' income will not cause parents' education; if there is a direction, it must be education -> income.

I also don't understand the point about the confidence interval of the education variable including zero, because we are still dealing with the direct path.

Another thing I don't understand is why you put the term multicollinearity in bold. If you think you don't have multicollinearity, does the absence of multicollinearity prove that there is no correlation between the independent variables, i.e., that there is no indirect effect?

See here, by the way.

If there are no associations between independent variables or the model is perfectly specified, no other techniques need to be employed

MR is not supposed to give you the correlations between the independent variables; perhaps it's possible to request them, but I don't remember seeing them in SPSS or Stata anyway. Thus I don't understand how it is possible to reach such conclusions. SEM, by contrast, can give you the correlations between the independent variables.

.... .... ....

Emil, normally I give an approval or disapproval plus comments when I think I know what to say and what the issue is about. In the present case, if I accept publication, it means I must endorse a use of MR that I believe is just wrong. Of course, my opinion on this is unlikely to be accepted by others unless some statistics superstar endorses it and many others then follow. So I don't know what to do. Actually, my feeling is more "no" than "yes", because I don't accept arguments from authority; just because most people disagree with me does not mean I'm wrong. This issue must be discussed, and we must arrive at an agreement (if we can). So I'm continuing the conversation: I want to reach an agreement.

By the way, I don't think it's a problem if I have to refuse to approve. If someone else decides it must be published, I will say nothing more: 3 agreements = publication, it's as simple as that. Like I said before, the safest strategy is always to side with the majority, even if you're wrong. In other journals, it's clear no one would take my comments on MR seriously, and everyone would ignore me. If you do the same, I will not object.

<blockquote>I don't think Jensen necessarily understands the problem of MR, given what I remember he has written about MR in The g Factor</blockquote>

Well, not only is it true that The g Factor should not have been published, but really all research on IQ and all research that considers race should be stopped because, as you pointed out:

<blockquote>The only way to get your full effects properly is by way of SEM+longitudinal data. There is no other way. Causality must deal with changes (over time). A “static mediation” like what happens when they use cross-sectional data is nonsense.</blockquote>

Thus, the only way to prove the causality of IQ’s influence is to greatly change a person’s IQ, which can be hard to do. In fact, it is impossible to change a person’s race. (<a href="http://www.dailymail.co.uk/news/article-2645950/I-fun-bein-Korean-Blonde-Brazilian-man-undergoes-extraordinary-surgery-achieve-convincing-Oriental-look.html">Or is it???</a>)

<blockquote>I have, if I remember, already given one proof of it, in the 2nd paragraph in my comment.</blockquote>

<blockquote>It's interesting that each time I try to constrain the indirect paths in SEM model to zero, the r² systematically decreases. And I'm thinking that if r² regards only the direct paths, as you imply, this can never happen.</blockquote>

Okay, so beta coefficients can’t see the total effect, but coefficients of determination can because they can in an SEM model. I assume you are referring to some other research project you have, but if it is not MR, how does that prove that the coefficient of determination in an MR model can see the same thing?

<blockquote>I have already explained this point in my post. It does not matter, B or Beta. You must know the total effect.</blockquote>

<blockquote>multiple regression should be used to test whether any variable can be weakened in its direct effect by other independent variables but not to test between them as if they were in competition with each other.</blockquote>

Therefore, any MR study that mentions a beta value is wrong, because beta values assume that there is some purpose in standardizing a variable for comparison, which is wrong according to your research.

Heavy metal poisoning seems to be useful in changing one's IQ. Only in the wrong direction, however.

By the usual standards, publication can proceed. I am waiting merely to see if some form of agreement can be found regarding the MR issue. I'm afraid I'm not quite certain what the exact problem is with the paper. MR finds only the strength of the direct path, right. A small or zero b does not prove that a variable has a small total effect, sure. This does not seem to be news.

My understanding is that the point of the paper is to respond to the claim that SAT tests just measure parental income. The author shows that this does not pan out in regressions with other important variables.

I said before that I had a paper on r². It took me an eternity to find it, but hopefully I've got it. The reason I failed to find it right away is perhaps not only that I have a lot of documents, but that, because of this, I created multiple folders: one for CFA/SEM, one for DIF, one for multiple regression, one for other basic stats, and so on. The paper was a presentation of a tool for an R package, so I had put it in my "R packages" folder, whereas I kept searching my "regressions" and "CFA/SEM" folders... in vain.

You can have a free link here:

http://www.tarleton.edu/institutionalresearch/documents/BehavorialResearchMethods.pdf

The proof is given directly in table 6, where the R² is the total of the unique and common effects. They also say:

Also called element analysis, commonality analysis was developed in the 1960s as a method of partitioning variance (R2) into unique and nonunique parts (Mayeske et al., 1969; Mood, 1969, 1971; Newton & Spurrell, 1967).

And because I was a little curious, I searched through Google Scholar and found this one:

Seibold, D. R., & McPhee, R. D. (1979). Commonality analysis: A method for decomposing explained variance in multiple regression analyses. Human Communication Research, 5, 355–365.

Whether in the “elements analysis” of Newton and Spurrell (1967), the “components analysis” of Wisler (1969), or the “commonality analysis” of Mood (1969, 1971), it was also noted that the unique effects of all predictors, when added, rarely summed to the total explained variance.

So in the case of even five predictors R2 may be decomposed into 31 elements. Since 26 of these are commonalities, difficulties in interpreting higher-order common effects increase in proportion to increases in the number of predictors.

And by the same token, I also found this:

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2930209/

It says the same thing. I haven't read it carefully (just quickly), but the first one is more than sufficient.
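The decomposition these sources describe is easy to check numerically. Below is a minimal Python sketch (numpy only, made-up simulated data) of commonality analysis for two correlated predictors: the full R² partitions exactly into two unique effects plus one common effect, which is why the unique effects alone rarely sum to R².

```python
import numpy as np

def r_squared(y, X):
    """R² of an OLS fit of y on X (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
n = 10_000
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)        # correlated predictors
y = 0.5 * x1 + 0.3 * x2 + rng.normal(size=n)

r2_full = r_squared(y, np.column_stack([x1, x2]))
r2_x1 = r_squared(y, x1[:, None])
r2_x2 = r_squared(y, x2[:, None])

unique1 = r2_full - r2_x2                 # unique effect of x1
unique2 = r2_full - r2_x1                 # unique effect of x2
common = r2_full - unique1 - unique2      # common (shared) effect

print(unique1, unique2, common, r2_full)  # unique1 + unique2 + common = R²
```

With k predictors the same all-possible-subsets logic yields 2^k − 1 components (31 for five predictors, 26 of them commonalities), matching the quote above.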

Okay, so beta coefficients can’t see the total effect, but coefficients of determination can, because they can in an SEM model. I assume you are referring to some other research project of yours, but if it is not MR, how does that prove that the coefficient of determination in an MR model can see the same thing?

You can see above. But regardless of that, it's common sense. I mean, MR and SEM are both multiple regression. It's the same stuff. SEM just offers more possibilities and is thus even more complex. Look here: same numbers everywhere.

http://humanvarietiesdotorg.files.wordpress.com/2014/06/multiple-regression-multiple-fallacies-spss-multipleregression.png

http://humanvarietiesdotorg.files.wordpress.com/2014/06/multiple-regression-multiple-fallacies-amos-sem.png
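The direct-versus-total distinction at issue here can also be illustrated numerically. The sketch below (Python, simulated data, assuming the simple mediation structure x1 → x2 → y plus a direct path x1 → y) shows that the multiple-regression coefficient of x1 recovers only the direct path, while the simple regression of y on x1 recovers the total effect, direct plus indirect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
a, b, c = 0.7, 0.4, 0.2   # paths: x1 -> x2, x2 -> y, x1 -> y (direct)

x1 = rng.normal(size=n)
x2 = a * x1 + rng.normal(size=n)
y = c * x1 + b * x2 + rng.normal(size=n)

def ols_slopes(y, X):
    """OLS slope estimates (intercept included, then dropped)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

direct = ols_slopes(y, np.column_stack([x1, x2]))[0]  # MR coefficient of x1
total = ols_slopes(y, x1[:, None])[0]                 # simple-regression slope

print(direct)  # ~ c = 0.2 (direct path only)
print(total)   # ~ c + a*b = 0.48 (direct + indirect)
```

Neither result is wrong; they answer different questions, which is exactly why the interpretation of an MR coefficient needs to be stated explicitly.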

Therefore, any MR study that mentions a beta value is wrong because beta values assume that there is some purpose in standardizing a variable for the purpose of a comparison, which is wrong according to your research.

No. In my last comment I said you can make some assumptions. But these should be either theoretically based or supported/suggested by previous research. This relies on strong assumptions, of course, and it makes your interpretation very dependent on those assumptions. It is doable, but you cannot say there is no ambiguity in MR.

A small or zero b does not prove that a variable has a small total effect, sure. This does not seem to be news.

Considering that many if not all researchers interpret MR coefficients the wrong way, I have the impression this is new to them. Because if not, I don't understand why they continue to say such things.

In any case, I'm not arguing that the author must agree with me here. Like I said, my opinion is not "recognized". However, if you want me to approve, the author has two options:

1. He beats me in the ongoing duel, in which case I will withdraw my argument and approve.

2. He states explicitly in the text that he acknowledges the "possibility" that the MR coefficient evaluates the direct effect (not the total effect) of the independent variables, and thus that the total effects of these variables need not be the same as what is displayed by the MR regression coefficients. For example, he can say (linking to my blog post) that the MR coefficient is the equivalent of the direct path in SEM models. I need only this modification, a sort of "caveat" for the readers, and then I approve. Once again, I don't say he needs to endorse my views, only that he shows he is open to this possibility.

Does that sound reasonable?

**EDIT.** My understanding is that the point of the paper is to respond to the claim that SAT tests just measure parental income. The author shows that this does not pan out in regressions with other important variables.

I'm OK with that. I don't doubt the effects of these variables; I'm just thinking about their relative strength, e.g., which one is the strongest, or what the total effect of x1, x2, etc. is.

Some variables have a strong effect, e.g., participation; others have an effect that is close to zero, e.g., income. And like I said before, I doubt income can cause education (unless education is measured at time 2 and income at time 1, in a repeated-measures analysis). But, as I also said, the more independent variables you have, the more you need to disentangle these possible indirect effects and specify whether x1 can cause x2, etc.

I never said he needs to redo the analysis. Certainly not. But he needs to make crystal clear how he interprets the output, and specifically the possible indirect effects.