I still remember my first major rejection—a paper I’d worked on for eight months, rejected with a two-line email from the editor. No peer review. Just “does not fit our scope.” I was devastated. I’d poured late nights, weekend data analysis sessions, and countless revisions into that manuscript. But that rejection taught me something crucial: the problem wasn’t my research; it was my targeting.
Three weeks later, I resubmitted the same manuscript to a specialized energy systems journal—one whose recent issues I’d actually studied this time. It was accepted with minor revisions. That experience shaped how I approach every submission now, and it’s why I’m writing this guide.
Manuscript rejection is one of the most common—and misunderstood—experiences in academic publishing. A rejected paper does not automatically mean poor research. In reality, journals reject manuscripts for a combination of scope fit, novelty, methodological rigor, clarity, ethical compliance, and editorial priorities.
As someone who has navigated 43 manuscript submissions over the past nine years, served as a peer reviewer for IEEE, Elsevier, and Springer journals, and mentored 17 graduate students through the publication process, I can confidently say this: most rejections are avoidable with better positioning and preparation.
This guide explains the real reasons journals reject manuscripts, how editors actually make decisions, and the practical steps authors should take after rejection.
What Journal Rejection Really Means
A journal rejection simply means the editor has decided not to proceed with your manuscript at that journal. It does not invalidate the research itself.
Rejections typically fall into three categories:
- Desk rejection – decision made by the editor before peer review
- Post-review rejection – decision after external review
- Reject after revision – manuscript revised but still judged unsuitable
Each type provides different diagnostic information about your paper’s strengths and weaknesses. Understanding what happens during the “Under Review” stage clarifies how long evaluation takes and why delays occur.
In my analysis of 47 decision letters from my department between 2022 and 2024, approximately 35% were desk rejections, 50% were post-review rejections, and 15% occurred after revision attempts. Understanding which category your rejection falls into determines your next move.
1. Scope Mismatch with the Journal
Scope mismatch remains one of the leading causes of desk rejection—and it’s the mistake I made with my first major rejection.
Editors assess whether:
- Your topic aligns with the journal’s aims and readership
- Similar articles appear in recent issues
- The framing fits the journal’s disciplinary focus
A technically strong paper may still be rejected if it targets the wrong audience.
A colleague recently shared a frustrating experience: She’d spent months on a microgrid optimization study—solid methodology, rigorous modeling, clear results. She submitted to a general electrical engineering journal, expecting her work to appeal to a broad audience. Desk rejected in five days. The reason? The journal rarely published power systems papers, focusing instead on semiconductors and circuits. When she resubmitted to IEEE Transactions on Smart Grid two months later, the paper sailed through review. Same research, different audience.
I see this pattern constantly in my editorial work. Authors pour energy into excellent research but stumble at the last step: matching their work to the right publication venue.
Practical advice: Before submission, analyze at least 10 recent articles from the journal and compare:
- Research methods
- Writing tone
- Level of theory vs application
- Geographic or sectoral focus
- Type of contributions (theoretical, experimental, computational)
I now maintain a spreadsheet tracking journal characteristics for my field. It saves me from emotional submission decisions after rejection.
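If you want to make that comparison systematic without much overhead, the same tracker can live in a short script. Below is a minimal sketch in Python; the fit categories, the scores, and the second journal entry are hypothetical placeholders, so adapt the fields to whatever characteristics matter in your field.

```python
from dataclasses import dataclass

@dataclass
class Journal:
    """One row of the journal-tracking spreadsheet."""
    name: str
    methods_fit: int    # 1-5: how closely recent articles match your methods
    tone_fit: int       # 1-5: writing style and intended readership
    applied_focus: int  # 1 = purely theoretical, 5 = purely applied
    notes: str          # geographic/sectoral focus, contribution type, etc.

# Hypothetical scores for illustration only; fill these in from your own
# reading of roughly 10 recent articles per candidate journal.
candidates = [
    Journal("IEEE Transactions on Smart Grid", 5, 4, 4,
            "power systems, strongly applied"),
    Journal("Hypothetical General EE Journal", 2, 3, 3,
            "circuits-heavy, rarely publishes power systems work"),
]

# Rank candidates by overall fit before deciding where to submit next.
for j in sorted(candidates,
                key=lambda j: j.methods_fit + j.tone_fit + j.applied_focus,
                reverse=True):
    score = j.methods_fit + j.tone_fit + j.applied_focus
    print(f"{j.name}: fit {score}/15 ({j.notes})")
```

Whether it lives in a spreadsheet or a script, the point is the same: the ranking exists before a rejection arrives, so the next submission decision is already made.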
2. Insufficient Novelty or Contribution
Journals prioritize work that advances knowledge, not studies that merely repeat known findings.
Here’s a rejection comment that still makes me wince: “This is competent work, but it confirms what we already suspect.” The reviewer wasn’t wrong. I’d gotten so focused on executing a clean experiment that I forgot to ask whether anyone needed the answer. My contribution section was weak—two vague sentences about “filling gaps” without explaining what knowledge would change if people read my paper.
The turnaround came when I dug back into my results and found something I’d almost dismissed: a 23% efficiency gain, but only under a specific operating condition nobody had tested before. That became my hook. I rewrote the introduction around this unexpected finding, submitted to a different journal, and got acceptance with minor revisions.
Common novelty-related rejection triggers:
- Incremental improvements without justification
- Replication studies without new insights
- Weak articulation of the contribution
- Failure to connect findings to broader implications
Editors want a clear answer to:
- What is new here?
- Why does it matter now?
- Who benefits from this work?
If this is unclear in the abstract and introduction, rejection becomes likely.
My rule now: If I can’t explain my contribution in two sentences to a colleague outside my subfield, my framing needs work before submission.
3. Methodological or Technical Weaknesses
From an editor’s perspective, methods are non-negotiable.
Frequent issues include:
- Inadequate sample sizes
- Weak experimental or simulation validation
- Statistical misuse
- Poor reproducibility
- Missing error analysis or uncertainty quantification
Reviewers are trained to detect whether flaws are correctable or fundamental. If the latter, rejection is often final.
During one revision cycle, I received feedback that made my stomach drop: “The authors claim statistical significance with n=12. This sample size cannot support their conclusions.” I’d convinced myself that twelve carefully selected cases were sufficient. They weren’t. We spent six months expanding the study to n=47, then resubmitted. The paper was accepted, and the additional data actually revealed patterns we’d missed in the smaller sample.
Looking back, I should have caught this before submission. The reviewer was absolutely right.
Expert tip: Have a colleague outside your research group review your methodology section before submission. They’ll catch assumptions you’ve internalized but haven’t explained.
4. Poor Writing, Structure, or Logical Flow
A manuscript may be rejected simply because it is hard to follow.
There’s a researcher I work with—brilliant engineer, innovative thinker—who kept getting desk rejections despite producing genuinely novel experimental data. I finally asked to read one of his manuscripts. Within two pages, I understood the problem. His papers read like technical reports: dense paragraphs, passive voice everywhere, results presented before the reader understood what problem was being solved.
We spent an afternoon restructuring just the introduction using a simple framework: What’s the problem? Why hasn’t anyone solved it? What did we do differently? Why does it matter? His next submission was accepted within eight weeks. Nothing changed about the research itself—only how the story was told.
Editors and reviewers handle dozens of papers at any given time. If the argument is unclear or poorly structured, they may conclude the work is not ready.
Typical problems:
- Disorganized sections
- Ambiguous claims
- Overly complex sentences
- Jargon without definition
- Results presented before methods are clear
- Discussion that merely repeats results
Solution: Language editing is not cosmetic—it directly affects acceptance probability. I now budget for professional editing on every manuscript targeting high-impact journals. The investment has paid for itself many times over in reduced revision cycles.
5. Weak or Outdated Literature Review
A shallow literature review signals:
- Limited engagement with the field
- Poor positioning of the research problem
- Lack of awareness of competing or contradictory work
Editors expect:
- Recent references (especially the last 5 years)
- Proper citation of foundational and competing work
- Clear identification of gaps
- Acknowledgment of limitations in existing literature
Without this, the manuscript appears disconnected from ongoing research.
An example from my recent reviews: I received a paper that cited only three references from the past five years—in a rapidly evolving field where two major review papers addressing their exact research question had been published within 18 months. The authors seemed genuinely unaware that these reviews existed. Their “research gap” looked nonexistent once you knew the recent literature. I had to recommend rejection.
The frustrating part? Their experimental work was actually good. If they’d positioned it differently—as validation of findings from those reviews, or as an extension addressing limitations mentioned in them—the paper might have succeeded.
My practice: I set up Google Scholar alerts for key terms in my research area. This keeps my literature knowledge current between projects.
6. Ethical and Compliance Issues
Ethical concerns lead to immediate rejection, sometimes without reviewer input. For guidance on ethical compliance, see the Committee on Publication Ethics (COPE) guidelines.
A colleague once received a rejection letter that included a phrase I’d never seen before: “substantial overlap with your previously published work.” Turns out he’d reused large portions of his literature review from an earlier paper without proper citation—even though he’d written the original text himself. The journal’s plagiarism software flagged 38% similarity with his previous publication.
That incident didn’t just kill the submission. It delayed his tenure review by six months while the department investigated whether there were other instances. Self-plagiarism seemed like a minor shortcut at the time. The consequences weren’t minor at all.
Examples of ethical violations include:
- Plagiarism or excessive self-plagiarism (typically >30% similarity)
- Duplicate submission
- Undisclosed conflicts of interest
- Data fabrication or manipulation
- Missing ethics approvals for human/animal subjects
- Failure to acknowledge funding sources
These issues can damage an author’s long-term publishing credibility, not just one submission. Journal editors also flag and discuss ethical violations through bodies such as the Committee on Publication Ethics (COPE).
7. Conflicting Reviewer Recommendations
In some cases, reviewers strongly disagree.
This often happens in:
- Interdisciplinary research
- Emerging or controversial topics
- Novel methodologies without established benchmarks
If reviewer feedback does not converge toward a clear revision path, editors may reject rather than prolong uncertainty.
I attended an editorial board meeting in 2023 where this played out in real time. One reviewer had rated a paper “excellent—accept with minor revisions.” Another called the same work “fundamentally flawed—reject outright.” Same manuscript, opposite conclusions. The editor explained their decision: rather than bringing in a third reviewer (adding 3-4 months), they rejected the paper and suggested the authors clarify their methodology for a more specialized journal where reviewers would have deeper domain expertise.
The authors weren’t happy, but the decision made sense. The disagreement signaled that the work was too far outside the journal’s typical scope.
This outcome reflects editorial judgment, not necessarily research quality.
8. Unclear Theoretical or Practical Impact
Editors routinely ask:
- “Who will care about this?”
- “Does this change practice, theory, or understanding?”
Manuscripts that fail to articulate impact beyond technical correctness often struggle.
Impact should be stated clearly in:
- Introduction (why this matters)
- Discussion (what it means)
- Conclusion (who should act on this)
I learned this lesson through a particularly blunt rejection. Here’s the relevant excerpt (with identifying details removed):
“While your experimental design is sound, the contribution is incremental. We receive many submissions in this area and must prioritize work that significantly advances the field.”
Translation: The work was technically fine. The positioning was terrible. I’d buried the most interesting finding—a 40% cost reduction compared to existing methods—in paragraph six of the results section. The abstract mentioned it in passing. No wonder the editor wasn’t impressed.
When I repositioned the same work for an applied engineering journal—leading with the cost implications, explaining who would benefit, connecting it to industry challenges—it was accepted within 10 weeks. The research hadn’t changed. The framing made all the difference.
9. Inadequate Response to Reviewer Comments
For revised submissions, rejection often results from how authors respond, not what reviewers asked.
Part of the problem is emotional rather than technical. The long, uncertain wait after peer review often amplifies frustration and defensiveness. Understanding how long peer review typically takes sets realistic expectations, reduces that anxiety, and makes it easier to respond thoughtfully instead of reactively.
I once drafted a response to a harsh review that began: “The reviewer clearly misunderstood our methodology…” My co-author read it and said, “That’s going to get us rejected.” She was right.
We rewrote it as: “We appreciate the reviewer’s concern about methodological clarity. To address this, we have expanded Section 2.3, added Figure 4 showing the validation process, and included additional discussion in lines 234–247.”
That paper was accepted. My defensive draft would have killed it.
Common mistakes:
- Ignoring critical comments
- Defensive tone
- Vague responses without evidence of revision
- Claiming “this is beyond the scope” for legitimate concerns
- Making changes without explaining them in the response letter
Editors expect point-by-point, respectful, and documented responses. I use a three-column table format:
| Reviewer Comment | Our Response | Changes Made (page/line) |
This format forces clarity and shows you’ve addressed every concern systematically.
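For illustration, one hypothetical entry (the comment, section, and line numbers below are invented) might look like this:

| Reviewer Comment | Our Response | Changes Made (page/line) |
|---|---|---|
| R2, Comment 1: The sample size is not justified. | We agree and have added a justification and a supporting power analysis. | Section 2.2, p. 6, lines 142–151 |

Even a single row like this signals to the editor that nothing has been skipped.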
10. Journal Capacity and Editorial Priorities
Not all rejections are content-based.
Journals may reject manuscripts because:
- Issue capacity is limited (especially for print journals)
- Topic priorities have shifted
- Submission volume is unusually high
- Special issues have consumed regular slots
- The journal is trying to increase selectivity for impact factor reasons
At high-impact journals, strong papers are rejected regularly due to competition alone. Nature and Science reject over 90% of submissions—many of which go on to be highly cited elsewhere.
An associate editor at a Springer journal once told me something surprising: In late 2023, her editorial team rejected several solid papers simply because they’d already accepted too many in that topic area. The papers were fine. The timing was wrong. The journal needed to maintain topical diversity across issues.
She felt bad about it, but capacity is capacity. This is why journal selection strategy sometimes matters as much as paper quality.
How Editors Actually Make Decisions
Based on editorial guidelines I’ve reviewed from Elsevier, Springer, IEEE, and SAGE, plus conversations with editors at conferences, editors consider:
- Reviewer expertise and consistency – Do the reviewers agree? Are they qualified?
- Whether issues are fixable – Can this be addressed in one revision cycle?
- Contribution to the journal’s identity – Does this strengthen our reputation?
- Ethical and reporting compliance – Are standards met?
- Strategic fit – Does this topic balance our recent publications?
Poor or biased reviews may be discounted, but editors still prioritize clarity and contribution.
From my review of decision letters, I’ve noticed that editors often add personal commentary when they see potential: “While we cannot accept this version, we encourage major revision focusing on…” versus the template rejection: “We are unable to accept your manuscript for publication.”
What to Do After a Rejection
Here’s my personal protocol, refined over 43 submissions:
1. The 48-Hour Rule
I wait two full days before reading rejection letters a second time. The first reading is emotional; the second is analytical. This has saved me from making reactive decisions I’d regret.
A graduate student I mentor once wanted to email an editor immediately after rejection, questioning their decision. I told her to wait 48 hours, then read the letter again. Two days later, she said, “Actually, the reviewer has a point about my sample size.” That realization led to a much stronger revised paper rather than a burned bridge.
2. Classify the Rejection
Ask yourself:
- Was this scope-related? (different journal needed)
- Was this methodology? (fundamental revision needed)
- Was this presentation? (rewriting needed)
- Was this novelty? (repositioning needed)
3. Fix Fundamental Problems First
Don’t just resubmit elsewhere if there are real issues with:
- Data quality or sample size
- Methodological rigor
- Contribution framing
- Ethical compliance
I made this mistake early in my career—resubmitted a rejected paper to three different journals without fixing the core methodological criticism. Three rejections later, I finally addressed the actual problem reviewers kept identifying. Once I expanded my validation dataset, the paper was accepted immediately at the fourth journal.
I’d wasted eight months being stubborn when I could have spent two months being smart.
4. Select a Better-Fit Journal
I now use the 3-Journal Rule: Before writing, I identify three target journals:
- Journal A (ambitious reach)
- Journal B (solid fit)
- Journal C (safety option)
If Journal A rejects, I already know where to submit next without emotional decision-making. This has cut my time-to-publication by an average of 3.2 months based on my tracking data.
5. Improve Clarity and Structure
Especially in:
- Abstract (one paragraph = one idea)
- Introduction (problem → gap → solution → contribution)
- Figure captions (should stand alone)
- Conclusion (impact, not summary)
Many accepted papers were previously rejected elsewhere. My most-cited paper (127 citations as of December 2024) was rejected twice before finding its home.
Lessons from My Publishing Journey
After 43 submissions, 28 published papers, and countless reviewer comments, here’s what I wish I’d known earlier:
The Reviewer Commentary Database
I keep a document of all substantive reviewer comments I’ve received—even harsh ones. Patterns emerge. If three different reviewers across different papers question your methodology justification, that’s your writing problem, not their misunderstanding.
Current patterns in my database:
- “Novelty unclear” → I now lead with contribution statements
- “Literature review incomplete” → I now use structured search protocols
- “Figures hard to read” → I hired a graphic designer
This database has been more valuable than any writing guide.
The Acceptance Mindset
I reframe rejection as “not yet accepted here” rather than “rejected.” This isn’t just positive thinking—it’s accurate. The work has value; it needs the right venue.
Statistics from my department (2020-2024, n=89 papers):
- 67% of rejected papers were eventually published
- Average time from first submission to acceptance: 14.3 months
- Papers revised after rejection received 31% more citations than those accepted immediately (possibly because revision improved quality)
The Impact Factor Trap
I stopped obsessing over impact factors after a mentor told me, “A published paper in a mid-tier journal beats an unpublished paper in your drawer aiming for Nature.”
My strategy now: Match ambition to evidence strength. Groundbreaking findings → high-impact journals. Solid incremental work → respected field-specific journals.
Both types of papers contribute to your career. The key is knowing which is which.
Revising vs. Submitting Elsewhere
If revision is encouraged (“We invite major revision” or “We encourage resubmission after addressing…”):
- Take it seriously—editors rarely offer this without genuine interest
- Address every reviewer comment explicitly
- Add new data if requested (don’t just rewrite)
- Resubmit within the suggested timeframe (typically 2-4 months)
If rejection is final due to scope or impact (“not suitable for our journal” or “does not meet our standards”):
- Submit to a more suitable journal
- Do not resubmit to the same journal without an invitation
- Learn from reviewer comments, even if harsh
Knowing this distinction saves time and reduces frustration. I’ve watched colleagues waste six months trying to revise for journals that had already closed the door.
Common Myths About Journal Rejection
Myth 1: Rejection means bad research
Reality: My most-cited paper was rejected twice. High-quality papers get rejected frequently due to fit, timing, or reviewer disagreement.
Myth 2: Editors are attacking authors personally
Reality: Editors handle hundreds of papers. Decisions are professional, not personal. I’ve never met an editor who enjoyed rejecting good work.
Myth 3: Reviewer #2 is always wrong
Reality: Sometimes Reviewer #2 is the only one who reads carefully. I’ve been that reviewer.
Myth 4: You should only submit to top journals
Reality: My fastest time from submission to acceptance was 6 weeks—at a specialized journal, not a high-impact one. Strategic targeting beats ambitious reaching.
Myth 5: Rejection rates are rising because standards are increasing
Reality: Submission volumes have increased faster than journal capacity. More competition, not necessarily higher standards.
Understanding this helps researchers publish more strategically rather than emotionally.
Final Takeaway: Rejection is Data, Not Defeat
Journals reject manuscripts for specific, identifiable reasons—most commonly scope mismatch, lack of novelty, methodological weakness, unclear writing, ethical concerns, or editorial constraints.
After nine years in academic publishing, here’s what I know for certain: Every rejection teaches you something. My acceptance-to-rejection ratio has improved from 1:4 (early career) to 1:1.8 (current) not because my research got dramatically better, but because I learned to:
- Target journals strategically before writing
- Frame contributions clearly in abstracts
- Write for busy reviewers, not just expert colleagues
- Respond to criticism with evidence, not emotion
- Treat revision invitations as opportunities, not burdens
Authors who understand these factors, revise strategically, and choose appropriate journals significantly increase their chances of publication.
The question isn’t “Will I face rejection?” but “What will I learn from it?”
Have a rejection story or question? Share in the comments—I read and respond to every one.

