Editorial Standards 📋 Methodology · Updated Quarterly · 📅 Last updated April 2026

How We Test — Our Methodology, Lab, & Editorial Standards

Inside the 9,800+ hours of testing that powers every recommendation on Price & Pick. Our 14-person editorial team evaluates 1,400+ products per year across electronics, home, beauty, fashion, fitness, and travel. Independent, rigorous, and never paid for placement. Cross-reference with our about page, affiliate disclosure, or our 2026 award winners.

1,400+
Products tested yearly
9,800+
Annual testing hours
14
Full-time reviewers
$0
Paid for placement
🔬 Inside the Testing Lab
From unboxing to scoring rubric — every product on Price & Pick goes through the same 6-stage evaluation, regardless of brand or price tier

The 4 editorial principles that guide every review

Our methodology starts before any product hits the lab. These four principles shape every recommendation you see across our reviews, buying guides, and 2026 awards. They're non-negotiable.

01
Independence

Editorial decisions are never paid placements

We never accept payment for placement, ranking, or favorable coverage. The gold-winning Apple Watch Series 10 earns roughly the same affiliate rate as products that didn't win at all. Brands cannot pay to appear in our best-of guides or modify our scoring. Read our complete affiliate disclosure for full transparency on how we earn commissions without compromising rankings.

02
Rigor

Testing duration matches category demands

A vacuum tested for 8 hours tells us nothing. Vacuums get 8+ weeks of daily use; mattresses get 30+ nights minimum; tech gets 4-6 weeks; apparel gets 30+ wears with 10+ washes; cookware gets 50+ cooks. We cross-reference with The Verge, CNET, Wirecutter, and Rtings — but we don't defer to them.

03
Transparency

Methodology, not opinion alone

Every score is justified by 5 weighted dimensions with explicit criteria: performance (40%), build quality & longevity (20%), value (15%), customer service & warranty (15%), and editorial X-factor (10%). When our findings disagree with consensus (e.g., the TCL QM7 outperforming pricier sets in our lab), we explain why. See our complete scoring rubric below.

04
Reader-First

We recommend what we'd buy ourselves

Our reviewers buy products with their own money or return loaner units after testing — we never keep brand-supplied gifts. Every pick has to answer the question, "would I tell my mom to buy this?" If the answer isn't yes, we don't recommend it. Browse our complete editorial standards or read about the products that did make the cut. For our most-trusted picks, see our review hub.

The 6-stage testing process — from unboxing to publish

Every product on Price & Pick goes through the same six stages, regardless of brand or price tier. Hands-on testing runs 4-6 weeks for tech, 8-12 weeks for home and sleep, and 6-8 weeks for fashion; the full cycle from sourcing to publication typically spans 8-16 weeks. Our published reviews reflect this depth.

Stage 01 ⏱️ 1-2 days

Sourcing & unboxing

We source products three ways: (1) buy retail with editorial budget (preferred — guarantees impartiality), (2) loaner units from manufacturers (returned after testing, never kept), or (3) reader-submitted units for niche items. Every product is unboxed on video for our records, with packaging condition, included accessories, and first-impression notes logged.

  • Out-of-box experience assessed: instructions clarity, setup time, missing accessories
  • Cross-referenced against Amazon reviews and Best Buy Q&A for known issues
  • Initial photo set captured for review pages and comparison tool
Stage 02 ⏱️ 1 week

Lab benchmarking & controlled tests

Quantifiable metrics get measured first. Audio products go through frequency response sweeps and ANC measurements. Vacuums get tested on standardized debris (Cheerios, sand, pet hair) across hardwood, carpet, and tile. TVs get colorimeter-measured for color accuracy and brightness. Mattresses get pressure-mapped. The goal: numbers we can defend.

  • All measurements logged in shared database — accessible to other reviewers for cross-checking
  • Standardized test protocols matched against Rtings and industry reference values
  • Calibration verified weekly with reference test patterns and known-good devices
Stage 03 ⏱️ 3-8 weeks

Real-world longitudinal testing

This is where most outlets fall short — and where we go deepest. Products live in our reviewers' actual homes for weeks of daily use. Dyson V15 vacuums handle three different homes (one with a shedding Labrador). Lululemon Align leggings get worn, washed, and re-tested 30+ times. Real-world failure modes only show up here.

  • Daily use logs maintained — performance over time matters more than out-of-box
  • Battery degradation, fabric pilling, paint chipping, electronics overheating all tracked
  • Sample size: minimum 2 reviewers per product for personal-fit categories (mattresses, apparel, beauty)
Stage 04 ⏱️ 1 week

Comparative benchmarking

No product exists in isolation. Sony WH-1000XM5 gets tested against Bose QuietComfort Ultra on the same flight. Saatva mattresses get pressure-mapped against Tempur-Pedic and Casper. We power our comparison tool with this data.

  • Side-by-side testing performed by same reviewer to eliminate observer variance
  • Reviews cross-referenced against The Verge and CNET findings
  • Disagreements with consensus flagged and re-tested by second reviewer
Stage 05 ⏱️ 2-3 days

Scoring & internal review

Every product gets scored across our 5 weighted dimensions by the lead reviewer. The score is then peer-reviewed by 2 additional team members who independently score the same product. If the three scores diverge by more than 15%, we re-test. If they align, the average becomes the final score.

  • Editorial standards meeting held every Monday — disputed scores discussed openly
  • Final scores published alongside the review for full transparency
  • Award medals (gold/silver/bronze) assigned by awards committee quarterly
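The peer-review rule above is simple arithmetic. Here is a minimal sketch in Python — note that the site doesn't specify what the 15% divergence is measured against, so this sketch assumes the spread of the three scores relative to their mean:

```python
def review_scores(scores, threshold=0.15):
    """Peer-review check: three independent scores per product.

    If the spread exceeds the threshold (measured here relative to
    the mean score -- an assumption, since the baseline isn't
    specified), the product is flagged for re-test. Otherwise the
    average becomes the final score.
    """
    mean = sum(scores) / len(scores)
    divergence = (max(scores) - min(scores)) / mean
    if divergence > threshold:
        return None  # scores diverged: re-test required
    return round(mean, 1)

print(review_scores([84, 86, 88]))  # aligned -> 86.0
print(review_scores([70, 90, 95]))  # diverged -> None (re-test)
```

Either interpretation (spread vs. mean, or pairwise differences) gives the same behavior for closely clustered scores; only borderline cases would differ.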
Stage 06 ⏱️ 3-5 days

Publication & continuous monitoring

The review goes live on our reviews hub. We don't walk away after publishing. Every winning product is re-checked at 6 months for long-term durability — products that fail post-publication get demoted with a transparent note. Reader feedback via our contact page feeds back into the next quarterly review.

  • 6-month, 12-month, and 24-month long-term updates published as separate articles (e.g. our AirPods Pro 2 long-term review)
  • Reader-reported issues investigated and added to "Known Issues" sections of reviews
  • Product de-listed and replaced if its quality measurably drops post-launch

The scoring rubric — how products earn medals

Every product is scored across 5 weighted dimensions on a 0-100 scale. The weighted total determines the final score. Gold winners score 90+, silver 85-89, bronze 80-84, editor's pick 75-79. See real scores in action on our 2026 awards page.

Dimension · Weight · What we measure
Performance 40% How well does the product do its core job? Quantitative where possible (battery life, suction, sound quality, color accuracy), qualitative where not (does it solve the problem). Most weight goes here because nothing else matters if performance fails.
Build quality & longevity 20% Will it last 3-5 years? Materials, manufacturing, fit-and-finish, repairability. The Le Creuset Dutch Oven scores high here for lifetime warranty + replaceable parts. Cheap plastic = automatic deduction.
Value 15% Performance per dollar. The TCL QM7 won bronze for value despite being beaten on raw performance by an OLED twice the price. We weight this lower because pure cheap-and-cheerful rarely wins overall.
Customer service & warranty 15% What happens when things go wrong? Saatva's 365-night trial + lifetime warranty earns full marks. We test customer service by submitting real support tickets anonymously and timing responses.
Editorial X-factor 10% Design, brand integrity, ecosystem fit, the indefinable "would I recommend this to my mom." Where editorial judgment lives. Read about why our 2026 picks won for examples.
Weighted total (100%): 90+ = Gold · 85-89 = Silver · 80-84 = Bronze · 75-79 = Editor's Pick
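The rubric reduces to a weighted sum over the five 0-100 dimension scores. A minimal sketch in Python (the dictionary keys are illustrative shorthand, not the site's internal field names):

```python
# Dimension weights from the scoring rubric
WEIGHTS = {
    "performance": 0.40,       # core job, quantitative where possible
    "build_quality": 0.20,     # materials, longevity, repairability
    "value": 0.15,             # performance per dollar
    "service_warranty": 0.15,  # support quality, warranty terms
    "x_factor": 0.10,          # editorial judgment
}

def final_score(dimension_scores):
    """Weighted total of the five 0-100 dimension scores."""
    return sum(WEIGHTS[k] * dimension_scores[k] for k in WEIGHTS)

def medal(score):
    """Map a weighted total to its award tier."""
    if score >= 90: return "Gold"
    if score >= 85: return "Silver"
    if score >= 80: return "Bronze"
    if score >= 75: return "Editor's Pick"
    return None

scores = {"performance": 92, "build_quality": 88, "value": 85,
          "service_warranty": 90, "x_factor": 80}
total = round(final_score(scores), 2)
print(total, medal(total))  # -> 88.65 Silver
```

The example shows why performance dominates: a product scoring 92 on performance lands in Silver territory even with a middling X-factor score.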

Category-specific testing methodology

Different categories demand different approaches. A laptop tested for 4 weeks tells us a lot. A mattress tested for 4 weeks barely scratches the surface. Here's how we adapt our process for the 6 categories we cover most.

📱

Electronics & Tech

⏱️ 4-6 weeks of testing

Laptops, phones, headphones, TVs, gaming, smart home. Every product gets controlled lab benchmarks (battery, brightness, sound) plus daily real-world use. Cross-tested against direct competitors — Apple vs Samsung vs Sony. Read our electronics category.

Key criteria
  • Battery life
  • Display quality
  • Audio fidelity
  • Build durability
  • OS & update support
  • Repairability
  • Ecosystem fit
🏠

Home & Kitchen

⏱️ 6-12 weeks of testing

Vacuums, mattresses, cookware, appliances, smart home. Mattresses get 30+ nights minimum across multiple sleepers; vacuums tackle three different homes. Cookware gets 50+ cooks. Brand reliability data tracked across 6+ months. Read our home category.

Key criteria
  • Daily-use durability
  • Cleaning performance
  • Sleep quality (mattress)
  • Heat distribution (cookware)
  • Warranty terms
  • Replacement parts
  • Energy efficiency
💄

Beauty & Skincare

⏱️ 4-8 weeks of testing

Skincare, makeup, hair tools, fragrance. Skincare gets 4-6 weeks minimum for results to show. Tested across 2-3 skin types per product. Ingredient lists cross-checked against EWG and dermatologist consultants. Read our clean beauty marketing investigation.

Key criteria
  • Ingredient transparency
  • Visible results
  • Skin sensitivity
  • Texture & absorption
  • Packaging
  • Value per use
  • Cruelty-free status
👗

Fashion & Apparel

⏱️ 6-8 weeks of testing

Clothing, footwear, bags, accessories. Apparel gets worn 30+ times with 10+ washes to test fade, pilling, shape retention. Footwear tested by 2 reviewers with different gait patterns over 100+ miles. Read our spring 2026 fashion trends.

Key criteria
  • Fabric quality
  • Stitching integrity
  • Wash durability
  • Sizing accuracy
  • Fit consistency
  • Sustainability
  • Resale value
💪

Fitness & Wellness

⏱️ 6-10 weeks of testing

Connected fitness, recovery tools, wearables, supplements. Fitness equipment gets 6+ weeks of training with 3+ reviewers (different fitness levels). Wearables compared against Garmin and chest-strap heart rate monitors for accuracy. Read our marathon training gear guide.

Key criteria
  • Heart rate accuracy
  • GPS precision
  • Build durability
  • App ecosystem
  • Comfort during use
  • Subscription costs
  • Recovery tracking
✈️

Travel & Outdoor

⏱️ 4-6 weeks across 2+ trips

Luggage, backpacks, outdoor gear, travel tech. Luggage gets minimum 2 real flights to test wheel durability, zipper quality, strap fatigue. Outdoor gear field-tested in actual conditions, not lab simulations. Read our travel backpacks guide.

Key criteria
  • Zipper reliability
  • Wheel durability
  • Compartment design
  • Weather resistance
  • TSA-friendly
  • Warranty support
  • Weight efficiency

Inside our testing lab

Our 2,400 sq ft Brooklyn lab houses category-specific testing stations with calibrated instruments. Every measurement is reproducible. Cross-reference our equipment list with our about page or request a lab visit.

🎧

Audio Lab

Anechoically treated room for headphone, speaker, and earbud testing. ANC measurements via head and torso simulator (HATS) — same setup used by The Verge's audio team.

Equipment: Brüel & Kjær 4128-C HATS · Audio Precision APx515 · GRAS 43AG · pink-noise reference signal generator
📺

Display Lab

Calibrated darkroom for TV, monitor, and laptop screen testing. Color accuracy, peak brightness, viewing angles measured to industry-standard methodology used by Rtings.

Equipment: Klein K10-A colorimeter · X-Rite i1Pro 3 spectrophotometer · CalMAN Ultimate · Murideo SIX-G test pattern generator
🍳

Kitchen & Cookware Lab

Climate-controlled prep kitchen with calibrated thermometers, infrared cameras, and standardized recipes. Le Creuset, KitchenAid, Ninja all tested here.

Equipment: FLIR E60 thermal camera · ThermoWorks Thermapen · 12-cycle dishwasher endurance rig · standardized test recipes
🏋️

Fitness & Wearables Lab

Treadmill + bike testing rig with reference heart rate monitor, GPS truth tracker. Wearables from Fitbit, WHOOP, Garmin, Apple Watch compared.

Equipment: Polar H10 chest-strap reference HR · Stryd Power Pod · 5km calibrated indoor track · sleep-stage reference EEG
💄

Beauty & Skincare Lab

Climate-controlled (40% humidity, 21°C) with skin moisture meters and dermatological consultation. Tatcha, Glossier, Drunk Elephant all tested here.

Equipment: Corneometer CM 825 hydration meter · Sebumeter SM 815 · Cutometer dual MPA 580 elasticity meter · pH 1010L meter
👕

Apparel & Textile Lab

Fabric durability, color-fastness, and shrinkage tested across 10+ wash cycles. Standardized seam-strength tests. Lululemon, Patagonia, Allbirds all tested here.

Equipment: Martindale abrasion tester · spectrophotometer for color delta-E · seam slippage rig · 30L commercial washer/dryer
⭐ Editorial Independence

Why we don't just trust review aggregators

Amazon ratings, sponsored YouTube videos, and "top 10" listicles dominate product discovery — but they're built for engagement, not accuracy. We've documented Amazon ratings drift due to review brigades and incentivized reviewers, while sponsored content rarely tells you which products got passed over.

Our methodology is built to expose what the consensus misses. When the TCL QM7 outperformed sets twice its price in our color accuracy testing, we said so — even though most outlets reflexively defer to LG and Samsung. Read our complete disclosure or our 2026 award winners.

🔬

Meet the 14-person editorial team

Our reviewers come from journalism, engineering, design, and academia. Every team member has 5+ years of professional reviewing experience. Browse author bylines on our blog or reach out via the contact page.

JT

Jordan Taylor

Head of Testing Lab

9 years in connected home tech. Built 14 smart home setups for friends and family. Authored our $300 Smart Home Starter Kit.

MR

Maya Rodriguez

Senior Editor · Audio & Video

Former audio engineer at a Brooklyn recording studio. Authored our Sony vs Bose ANC showdown and Sony WH-1000XM5 review.

AK

Amelia Kim

Beauty & Skincare Lead

Licensed esthetician + 7 years reviewing skincare. Investigated clean beauty marketing and built our skincare for beginners guide.

DS

Daniel Singh

Sleep Lab Director

Sleep researcher + former mattress consultant. Tests every mattress for 30+ nights. Authored our best mattresses of 2026 guide.

EL

Erin Lee

Fitness & Wearables Lead

Marathon runner (3:12 PR), former Fitbit field tester. Authored our marathon training gear guide.

RB

Rohan Banerjee

Tech & Computing Editor

Former Linux kernel contributor. Tests laptops and reviews productivity software. Authored our complete laptop buyer's guide.

SB

Sasha Brooks

Fashion & Apparel Editor

10 years in editorial fashion. Tests every garment for 30+ wears. Authored our Lululemon vs Athleta comparison.

CW

Chen Wei

Home & Kitchen Editor

Former Williams Sonoma product specialist. Tests cookware across 50+ cooks. Authored our best air fryers of 2026.

How we maintain editorial independence

Independence is the foundation of every recommendation. Brands cannot pay for placement, ranking, or favorable coverage — period. Our affiliate model means we earn commission when readers click through and buy, but commissions don't influence rankings: the gold-winning Apple Watch Series 10 earns roughly the same affiliate rate as products that didn't win at all.

Our reviewers buy products with editorial budget or return loaner units after testing. We never keep brand-supplied gifts. Disagreements with consensus (when our findings contradict The Verge or CNET) are documented openly. Disputes between our own reviewers are resolved by re-testing, not consensus. Read our complete affiliate disclosure, our about page, or browse our 2026 award winners to see this independence in action.

$0
Paid for placement
100%
Reviews peer-reviewed
14
Independent reviewers
9 yrs
Editorial track record

Frequently asked questions

The most common questions readers ask about our testing methodology and editorial standards.

Do brands pay you to be featured on Price & Pick?

No. We never accept payment for placement, ranking, or favorable coverage. The gold-winning Apple Watch Series 10 on our 2026 awards page earns roughly the same affiliate rate as products that didn't win at all. Brands cannot pay to appear in our reviews or modify our scoring.

We do earn affiliate commissions when readers click through to brand sites and complete a purchase — this is industry-standard for review sites. Crucially, commissions don't influence rankings. Read our complete affiliate disclosure for full transparency.

How long does each product testing cycle take?

Testing duration varies by category to match real-world use patterns. Tech products get 4-6 weeks (laptops, phones, headphones). Home & sleep products get 6-12 weeks (vacuums, mattresses, cookware). Beauty & skincare gets 4-8 weeks for results to show. Fashion gets 6-8 weeks with 30+ wears and 10+ washes.

The whole process from sourcing to publication typically spans 8-16 weeks. See the 6-stage process timeline above for stage-by-stage timing or check the category-specific methodology for details on each.

What if your findings disagree with The Verge, CNET, or Wirecutter?

We disagree pretty often. When our findings contradict consensus (e.g., the TCL QM7 outperforming pricier sets in our color accuracy testing), we say so openly and document why. Other outlets are sometimes constrained by ad relationships or shorter testing cycles.

That said, we cross-reference our findings with The Verge, CNET, Wirecutter, and Rtings to validate our methodology. When 3+ outlets disagree with us, we re-test before publishing. Browse our 2026 awards for examples of where we diverge.

Can I suggest a product for review?

Yes — we genuinely read every reader submission. Use the "Submit for Review" form on our contact page with the product name, brand, and a brief note on why you think it deserves coverage. We add roughly 200 reader-submitted products to our annual testing pipeline.

Note: we cannot guarantee every submission gets reviewed. Niche products with limited availability or extremely high cost ($5,000+) may not fit our coverage. Browse our complete brand directory to see what we've already covered.

What happens after a product wins an award?

We don't walk away. Every winning product is re-checked at 6 months for long-term durability — products that fail post-publication get demoted with a transparent note. We publish 6-month and 12-month follow-up reviews (e.g., our AirPods Pro 2 long-term review) when meaningful updates emerge.

The awards page is updated quarterly to reflect long-term findings. If a winner shows quality drift, customer service issues, or is replaced by a newer, better product, the rankings update accordingly. Subscribe to our newsletter (below) to get notified of updates.

Do you keep the products you test?

No. Our reviewers buy products with editorial budget (most common) or test loaner units that are returned to manufacturers after testing. We never keep brand-supplied gifts, and our reviewers never accept personal gifts from brands.

This rule is non-negotiable. The independence principle is what makes our recommendations trustworthy — and that requires no personal financial benefit beyond our normal salary. Read our complete editorial standards.

How can I get notified when a new test publishes?

Subscribe to our newsletter (below) for weekly updates on new reviews, awards, and methodology changes. We send one email per week — never spam, and your address is never sold.

You can also follow us on social media (links in the footer below), browse our latest articles, or check our 2026 awards page which is updated quarterly. For specific category alerts (e.g., new mattress reviews), use the category subscription option in your account settings.

Get our weekly buying guides

Honest reviews and methodology updates from our 14-person editorial team. Our 10 best picks, delivered every Friday. No spam, ever.