Performance testing tools like Google PageSpeed Insights (PSI) and GTmetrix provide valuable insights into your website’s speed and user experience. However, interpreting these results correctly is crucial to making effective optimizations. This guide will help you understand what these tools measure, what the metrics mean, and how to identify and address performance issues.
This is a long and detailed article; use the Table of Contents to navigate to the sections relevant to your specific questions or needs.
Understanding Synthetic vs. Real-World Performance
Before diving into specific metrics, it’s important to understand what performance testing tools actually measure and their limitations.
What Synthetic Tests Measure
PageSpeed Insights and GTmetrix are synthetic testing tools. They measure performance in simulated, controlled conditions:
- Tests run from specific geographic locations
- Use standardized device and network simulations
- Test with an empty browser cache (unless specified otherwise)
- Run in controlled browser environments
- Measure performance at a single moment in time
Why Performance Test Scores Don’t Always Reflect Actual User Experience
Synthetic test scores often don’t match what your real visitors experience because:
- Geographic variation: Your actual visitors may be closer or farther from your server than the test location
- Device diversity: Real users have varying device capabilities, not just the simulated device
- Network conditions: Actual network speeds vary widely; tests use throttled but consistent connections
- Cache state: Return visitors benefit from cached resources; synthetic tests typically start with empty cache
- Server response variations: Server performance can fluctuate based on current load, time of day, and background processes
- Third-party script behavior: External scripts (ads, analytics, social media) can perform differently in synthetic tests vs. real conditions
When These Tools Are Actually Useful
Despite their limitations, PSI and GTmetrix are valuable for:
- Establishing baselines: Measure performance before making changes
- Comparing before/after: Evaluate the impact of specific optimizations
- Identifying technical issues: Discover render-blocking resources, oversized images, inefficient code
- Competitive benchmarking: Compare your site against competitors under identical conditions
- Catching regressions: Regular testing can alert you to performance degradations
Real User Monitoring
For a complete picture of performance, combine synthetic testing with Real User Monitoring (RUM):
- Google Analytics 4 Core Web Vitals report
- Google Search Console Core Web Vitals report
- Third-party RUM tools (if needed)
Real user data shows how actual visitors experience your site across diverse conditions, devices, and locations.
Best Practices for Testing
Testing Methodology
To get reliable, actionable results:
- Run multiple tests: Execute 3-5 tests and look at averages or median values, not single results
- Test at different times: Performance can vary by time of day due to server load or traffic patterns
- Compare similar conditions: Use the same test location and device simulation when comparing results
- Test key pages: Don’t just test your homepage; test important landing pages, product pages, and high-traffic content
- Clear cache appropriately: When testing optimizations, clear cache to ensure you’re measuring actual changes
- Document changes: Keep notes on what you changed so you can identify what helped (or hurt)
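If you want to go beyond eyeballing results, the median of several runs is a more stable number than any single test. A minimal sketch in JavaScript (the sample readings are invented for illustration):

```javascript
// Median of repeated test runs: less sensitive to one-off outliers than a mean.
function median(values) {
  if (values.length === 0) throw new Error("no test runs provided");
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  // Even count: average the two middle values; odd count: take the middle one.
  return sorted.length % 2 === 0 ? (sorted[mid - 1] + sorted[mid]) / 2 : sorted[mid];
}

// Example: five LCP readings (ms) from repeated tests of the same page.
const lcpRuns = [2300, 2450, 2380, 3900, 2410]; // one outlier at 3900
console.log(median(lcpRuns)); // → 2410
```

Note how the one slow outlier (3900ms) barely affects the median, while it would drag a simple average up by several hundred milliseconds.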
What to Focus On
When reviewing test results:
- Trends over time rather than fixating on absolute scores
- Before/after comparisons when making specific changes
- Metrics that impact user experience, especially Core Web Vitals
- Real-world user data from Google Analytics or Search Console when available
- Specific diagnostics that identify concrete issues to fix
Common Testing Pitfalls
Avoid these mistakes:
- Score obsession: A score of 95 vs. 98 doesn’t meaningfully impact user experience
- Homepage-only testing: Your homepage may be well-optimized while other pages struggle
- Ignoring variation: Test scores naturally fluctuate; one bad result doesn’t mean something broke
- Synthetic-only reliance: Don’t ignore real user metrics in favor of synthetic tests
- Too many changes at once: Make incremental changes so you can identify what actually helped
- Wrong metrics priority: Not all metrics equally impact user experience
Core Web Vitals and Key Metrics
Waterfall view is available in GTmetrix but not in PSI. Where it is mentioned as a diagnostic option below, consider testing your site with GTmetrix.
Time to First Byte (TTFB)
What it measures:
TTFB measures the time from when the browser requests a page until it receives the first byte of the response. This includes network latency, DNS lookup, connection time, and server processing time.
Target: Under 600ms (acceptable), under 300ms (good)
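The thresholds above can be encoded as a quick classifier, handy when scripting repeated tests (the label strings are our own, not PSI terminology):

```javascript
// Classify a TTFB reading against the thresholds above (values in milliseconds).
function rateTTFB(ttfbMs) {
  if (ttfbMs < 300) return "good";
  if (ttfbMs < 600) return "acceptable";
  return "needs work";
}

console.log(rateTTFB(250)); // → "good"
console.log(rateTTFB(450)); // → "acceptable"
console.log(rateTTFB(900)); // → "needs work"
```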
Common culprits:
- Slow server processing (inefficient code, complex database queries, plugin overhead)
- Lack of page caching or object caching
- Server resource constraints (high load, insufficient resources)
- Network latency (distance between user and server)
- DNS lookup time
- Excessive redirects
How to identify the cause:
- Check the “Reduce initial server response time” diagnostic
- Review server processing time in the waterfall chart (the purple “waiting” section for the HTML document, available in GTmetrix but not PSI)
- Compare TTFB across multiple pages (consistent issues suggest systematic problems)
- Test from different locations to isolate network vs. server issues
Optimization options:
- Enable page caching: Pressable’s built-in caching is already optimized for this
- Optimize database queries: Identify and fix slow queries using Query Monitor
- Reduce plugin overhead: Audit plugins for efficiency, remove unnecessary ones
- Use object caching: Pressable provides an object cache for database query results to reduce repeated processing
- Ensure adequate PHP workers: Check for worker saturation during traffic peaks
- Review admin-ajax.php usage: Excessive AJAX requests can slow down responses
- Minimize redirects: Reduce unnecessary HTTP redirects
- Use a CDN: Reduce distance between users and content (though TTFB measures the initial HTML, not CDN assets); Pressable’s edge cache serves as a CDN
Server Response Time (Initial Server Response Time)
What it actually measures:
This metric shows the time it takes for the server to process your request and begin sending a response. It includes database queries, PHP execution, plugin and theme processing, and server overhead.
The phrase “slow server response time” is frequently misunderstood. This metric refers to what your site’s code is doing, not the server hardware or capacity. The server itself is ready and waiting, but WordPress, your theme, and your plugins need time to process the request, run database queries, and generate the HTML before the server can begin responding.
Think of it this way: the server is a fast chef, and WordPress plus your theme and plugins are the recipe. If the recipe requires gathering ingredients from 50 different locations and preparing them in a complex way, the meal takes longer to serve no matter how skilled the chef is.
Common culprits:
- Inefficient or unoptimized database queries (especially uncached or repeated queries)
- Heavy plugin processing during page generation
- Unoptimized theme functions that run on every page load
- External API calls made during page generation (fetching data from third-party services)
- admin-ajax.php usage on the frontend
- Too many plugins running code on every page load, even when not needed
- Complex or nested loops in theme templates
- Autoloaded options in the database (especially large serialized data); optimize autoloaded data
How to identify the cause:
- Review the “Reduce initial server response time” diagnostic in PSI
- Check the waterfall chart for the initial HTML document request; the purple “waiting” time shows server processing (waterfall view is available in GTmetrix, not PSI)
- Install Query Monitor plugin to identify slow database queries and see what’s running on each page
- Review PHP error logs for warnings or notices that indicate inefficient code
- Look for admin-ajax.php calls in the waterfall chart (these shouldn’t typically run on frontend page loads)
- Test pages with different content to see if specific page types or content triggers slow responses
Optimization options:
- Enable and utilize object caching: Cache database query results to eliminate repeated queries (Pressable provides object caching automatically)
- Identify and optimize slow queries: Use Query Monitor to find queries taking over 0.1 seconds, then optimize them
- Remove or replace inefficient plugins: Audit plugins with Query Monitor to see which ones are doing heavy processing
- Defer non-critical operations: Move heavy processing to AJAX requests after page load or to background tasks (cron jobs using WP Cron)
- Reduce admin-ajax.php usage on frontend: Many plugins use admin-ajax.php unnecessarily; find alternatives
- Optimize or remove external API calls: Don’t fetch data from external services during page generation; cache results or load via JavaScript
- Use transient caching: Cache expensive operations (like complex calculations or API responses) using WordPress transients
- Review and optimize theme functions: Look for inefficient loops, queries in templates, or functions running on every page unnecessarily
- Conditional loading: Only load plugins and features on pages where they’re actually needed (use conditional logic)
- Optimize autoloaded data: Review autoloaded options in the database; large autoloaded data slows every page load
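Several of the options above (transients, object caching, caching external API responses) boil down to the same pattern: time-limited caching of an expensive result. In WordPress you would use set_transient() and get_transient(); here is the underlying idea sketched in JavaScript, with an injectable clock added purely so the example is testable:

```javascript
// A minimal time-to-live cache, analogous to WordPress transients:
// store an expensive result once, reuse it until it expires.
function createTTLCache(now = Date.now) {
  const store = new Map(); // key → { value, expiresAt }
  return {
    get(key) {
      const entry = store.get(key);
      if (!entry || now() >= entry.expiresAt) return undefined; // missing or expired
      return entry.value;
    },
    set(key, value, ttlMs) {
      store.set(key, { value, expiresAt: now() + ttlMs });
    },
  };
}

// Usage sketch: cache a slow computation (or API response) for 5 minutes.
let fakeTime = 0;
const cache = createTTLCache(() => fakeTime);
cache.set("report", { rows: 123 }, 5 * 60 * 1000);
console.log(cache.get("report")); // cached value while still fresh
fakeTime = 5 * 60 * 1000 + 1;
console.log(cache.get("report")); // → undefined (expired; recompute and re-set)
```

The point of the pattern: the expensive work (slow query, API call) runs once per TTL window instead of on every page load.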
On Pressable specifically:
- Pressable’s page caching already handles caching the final HTML output for repeat visitors
- Object caching is available and should be actively utilized for database query results
- PHP worker configuration is fixed at 5 workers with 512MB RAM each (or 10 workers on grandfathered accounts)
- Workers can burst up to 110 depending on server pool availability
- Check if PHP worker saturation is occurring during peak traffic (contact support if suspected)
- Server response time issues are almost always code-related, not server capacity problems
- NGINX and PHP configurations cannot be adjusted, so focus on code and query optimization
Distinguishing server issues from code issues:
- If response time is high but inconsistent: Likely code-related, such as uncached queries or external API calls
- If response time is consistently high across all pages: Investigate systematic issues like all plugins loading everywhere, theme overhead, or large autoloaded data
- If response time spikes during traffic: May indicate PHP worker saturation or database resource constraints under load
- Server-level hardware issues: These would typically affect multiple sites on the server, not just one site; if you suspect this, contact support
Remember: when you see “slow server response time” in test results, the first place to look is your code (plugins, theme, queries), not the server itself.
Largest Contentful Paint (LCP)
What it measures:
LCP measures how long it takes for the largest visible element in the viewport to render. This is typically a hero image, heading, or large text block. It represents when the main content becomes visible to users.
Target: Under 2.5 seconds (good), 2.5-4.0 seconds (needs improvement), over 4.0 seconds (poor)
Common culprits:
- Large, unoptimized images (wrong format, oversized dimensions, uncompressed)
- Slow server response times (high TTFB)
- Render-blocking JavaScript and CSS preventing content display
- Slow resource load times (large files, slow CDN, network issues)
- Images not prioritized for loading (missing preload hints)
How to identify the cause:
- Check the Diagnostics section for which element is the LCP element
- Review the network waterfall for the LCP resource to see how long it takes to load (waterfall view is available in GTmetrix, not PSI)
- Examine TTFB in the metrics panel (if TTFB is high, the server is delaying everything – this is usually due to slow queries, bloated autoload, or simply too many plugins trying to run code)
- Look for render-blocking resources that delay LCP element rendering
- Check image file size and format in the diagnostics
Optimization options:
- Optimize images: Compress images, convert to WebP format, ensure appropriate dimensions (don’t serve 3000px images when 800px would suffice)
- Implement lazy loading: Load below-the-fold images only when needed (but DON’T lazy load the LCP image)
- Use a CDN: Deliver assets from locations closer to users (Pressable’s edge cache functions as a CDN)
- Preload critical resources: Use <link rel="preload"> for LCP images or fonts
- Minimize server processing time: Optimize database queries, reduce plugin overhead, improve code efficiency
- Reduce render-blocking resources: Defer non-critical JavaScript, inline critical CSS (examples of plugins that can do this include Autoptimize and WP-Optimize)
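The "right-size your images" advice above can be turned into a quick check: compare an image's intrinsic width to the width the layout actually needs (CSS display width × device pixel ratio). A hedged sketch; the 1.5× slack factor is our assumption, not a Lighthouse constant:

```javascript
// Heuristic check for oversized images: compare an image's intrinsic width
// to the width actually needed (CSS display width × device pixel ratio).
// The 1.5× slack factor is an assumption, not a Lighthouse constant.
function isOversized(intrinsicWidth, displayCssWidth, devicePixelRatio = 1, slack = 1.5) {
  const neededWidth = displayCssWidth * devicePixelRatio;
  return intrinsicWidth > neededWidth * slack;
}

// A 3000px-wide source displayed at 400 CSS px on a 2× screen needs ~800px.
console.log(isOversized(3000, 400, 2)); // → true  (serve ~800px instead)
console.log(isOversized(800, 400, 2));  // → false (already right-sized)
```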
First Input Delay (FID) / Interaction to Next Paint (INP)
What it measures:
FID measures the time from when a user first interacts with your page (clicks a link, taps a button) to when the browser can actually respond to that interaction. INP is the newer metric that replaced FID as a Core Web Vital in March 2024; it measures responsiveness throughout the entire page lifecycle, not just the first interaction.
Target: FID under 100ms (good), INP under 200ms (good)
Common culprits:
- Heavy JavaScript execution blocking the main thread
- Large JavaScript bundles that take time to parse and execute
- Third-party scripts (analytics, advertising, chat widgets, social media embeds)
- Poorly optimized event handlers or listeners
- Long-running tasks preventing the browser from responding to input
How to identify the cause:
- Review the Total Blocking Time metric (high TBT usually correlates with poor FID/INP)
- Check JavaScript execution time in the diagnostics section
- Examine third-party script impact (PSI shows third-party code separately)
- Look for scripts loading synchronously in the <head> without defer or async
- Review the waterfall chart for large JavaScript files (waterfall view is available in GTmetrix, not PSI)
Optimization options:
- Defer non-critical JavaScript: Move scripts to load after page content (many optimization plugins offer this feature)
- Remove or lazy-load third-party scripts: Audit and remove unnecessary third-party code, or delay loading until user interaction
- Audit plugins: Many WordPress plugins load JavaScript on every page even when not needed
- Optimize event handlers: Reduce JavaScript execution time, debounce expensive operations
- Split large JavaScript bundles: Use code splitting to load only what’s needed for each page
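Debouncing, mentioned above, is one of the simplest ways to cut main-thread work from noisy events like typing or scrolling. A sketch with injectable scheduling functions (an assumption added so the example runs outside a browser; in a page you would rely on the built-in setTimeout/clearTimeout):

```javascript
// Debounce: collapse a burst of events (typing, scrolling, resizing) into
// one handler call after the burst goes quiet, reducing main-thread work.
function debounce(fn, waitMs, schedule = setTimeout, cancel = clearTimeout) {
  let timer = null;
  return (...args) => {
    if (timer !== null) cancel(timer); // a newer event resets the wait
    timer = schedule(() => {
      timer = null;
      fn(...args);
    }, waitMs);
  };
}

// Usage sketch: run an expensive search only after the user pauses typing.
const search = debounce((query) => console.log("searching for", query), 250);
search("p");
search("pe");
search("perf"); // only this last call's handler fires, 250ms after the burst stops
```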
Cumulative Layout Shift (CLS)
What it measures:
CLS measures visual stability by tracking unexpected layout shifts during page load. A layout shift occurs when visible elements move from their initial position. This creates a jarring experience for users (like clicking a button that moves right before you tap it).
Target: Under 0.1 (good), 0.1-0.25 (needs improvement), over 0.25 (poor)
Common culprits:
- Images, videos, or iframes without explicit width and height attributes
- Ads, embeds, or other injected content without reserved space
- Web fonts causing FOIT (Flash of Invisible Text) or FOUT (Flash of Unstyled Text)
- Dynamically injected content that pushes existing content down
- Elements that change size after loading (like carousels or animations)
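Each individual layout shift is scored as impact fraction × distance fraction, and CLS sums these scores over session windows. The sketch below simplifies to the common case of a full-width block shifting vertically:

```javascript
// Score for one layout shift of a full-width element that moves vertically.
// impact fraction: area the element touches before + after the shift,
//                  as a fraction of the viewport area.
// distance fraction: how far it moved, relative to the viewport's larger dimension.
function layoutShiftScore(viewportW, viewportH, elementHeight, shiftDistance) {
  const impactFraction = Math.min((elementHeight + shiftDistance) / viewportH, 1);
  const distanceFraction = shiftDistance / Math.max(viewportW, viewportH);
  return impactFraction * distanceFraction;
}

// A 200px-tall banner pushed down 100px in a 400×800 viewport:
console.log(layoutShiftScore(400, 800, 200, 100)); // → 0.046875
```

Even this single modest shift contributes almost half of the 0.1 "good" budget, which is why injected banners and ads without reserved space are such common CLS offenders.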
How to identify the cause:
- PSI highlights specific shifting elements in the diagnostics
- Review the “Avoid large layout shifts” diagnostic for affected elements
- Check for images and embeds without width/height attributes in your HTML
- Look for content injected via JavaScript after page load
- Test with font loading disabled to see if fonts are causing shifts
Optimization options:
- Add explicit dimensions: Set width and height attributes on all images, videos, and iframes
- Reserve space for dynamic content: Use CSS aspect-ratio or min-height for ads, embeds, and other dynamic elements
- Optimize font loading: Use font-display: swap or font-display: optional, or preload critical fonts
- Set dimensions on ad slots: Ensure advertising spaces have defined dimensions before ads load
- Avoid inserting content above existing content: Don’t inject banners, notices, or other content that pushes page content down
- Use CSS transforms: Animate with transform and opacity (which don’t trigger layout) instead of properties like top, width, or margin
First Contentful Paint (FCP)
What it measures:
FCP measures the time from navigation start until the browser renders the first bit of content (text, images, SVG, or canvas elements). It represents the first visual feedback that the page is loading.
Target: Under 1.8 seconds (good), 1.8-3.0 seconds (needs improvement), over 3.0 seconds (poor)
Common culprits:
- Slow Time to First Byte (server response delay)
- Render-blocking CSS and JavaScript in the <head>
- Large resources needed for above-the-fold content
- Inefficient critical rendering path
- Slow DNS resolution or connection times
Optimization options:
- Optimize server response time: Improve TTFB through caching, code optimization, and efficient queries
- Inline critical CSS: Include essential above-the-fold styles directly in HTML
- Defer non-critical CSS: Load below-the-fold styles asynchronously
- Minimize render-blocking JavaScript: Move scripts to the footer, use defer/async attributes, or consider an optimization plugin that can defer render-blocking resources
- Reduce initial payload: Minimize the size of the initial HTML document
- Use resource hints: Implement preconnect, dns-prefetch, or preload for critical resources
Total Blocking Time (TBT)
What it measures:
TBT measures the total amount of time between First Contentful Paint and Time to Interactive where the main thread was blocked for long enough to prevent input responsiveness. Essentially, it’s the sum of time your page is unresponsive to user input during load.
Target: Under 200ms (good), 200-600ms (needs improvement), over 600ms (poor)
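The metric's arithmetic is simple: each main-thread task is "long" once it exceeds 50ms, and only the portion beyond 50ms counts toward TBT:

```javascript
// Total Blocking Time from a list of main-thread task durations (ms).
// Only the portion of each task beyond the 50ms "long task" threshold counts.
function totalBlockingTime(taskDurationsMs, thresholdMs = 50) {
  return taskDurationsMs
    .map((d) => Math.max(0, d - thresholdMs))
    .reduce((sum, blocking) => sum + blocking, 0);
}

// Three tasks between FCP and TTI: 30ms, 120ms, and 80ms.
console.log(totalBlockingTime([30, 120, 80])); // → 100 (0 + 70 + 30)
```

This is why breaking one 300ms task into six 50ms chunks eliminates its TBT contribution entirely, even though the total work is unchanged.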
Common culprits:
- Long-running JavaScript tasks (over 50ms)
- Heavy third-party scripts
- Inefficient code execution
- Large JavaScript bundles being parsed and executed
- Synchronous script loading
Optimization options:
- Defer or remove unused JavaScript: Audit and eliminate unnecessary scripts
- Optimize third-party script loading: Delay third-party scripts until after page interactive, or remove if not essential
- Use code splitting: Load only the JavaScript needed for the current page
- Implement lazy loading for scripts: Load scripts on interaction rather than on page load
- Break up long tasks: Divide JavaScript tasks into smaller chunks (under 50ms each)
Speed Index
What it measures:
Speed Index measures how quickly content is visually displayed during page load. It captures the visual progression of page load and calculates how quickly the page contents are visually populated. Lower scores are better.
Target: Under 3.4 seconds (good), 3.4-5.8 seconds (needs improvement), over 5.8 seconds (poor)
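Conceptually, Speed Index is the area above the visual-completeness curve: the longer the page stays visually incomplete, the higher the number. A trapezoidal approximation from sampled completeness values (the sample points are invented for illustration):

```javascript
// Approximate Speed Index from (timeMs, visualCompleteness 0..1) samples
// using the trapezoid rule on (1 - completeness). Lower is better.
function approximateSpeedIndex(samples) {
  let area = 0;
  for (let i = 1; i < samples.length; i++) {
    const [t0, c0] = samples[i - 1];
    const [t1, c1] = samples[i];
    area += ((1 - c0) + (1 - c1)) / 2 * (t1 - t0);
  }
  return area; // milliseconds
}

// Page is blank at 0ms, half-painted at 1000ms, visually complete at 2000ms:
console.log(approximateSpeedIndex([[0, 0], [1000, 0.5], [2000, 1]])); // → 1000
```

This shows why prioritizing above-the-fold content helps: painting most of the screen early shrinks the area above the curve even if total load time is unchanged.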
Common culprits:
- Render-blocking CSS and JavaScript
- Large above-the-fold content that loads slowly
- Slow resource delivery (large files, slow CDN)
- Images that load sequentially rather than in parallel
- Excessive JavaScript execution during initial render
Optimization options:
- Prioritize visible content: Ensure above-the-fold resources load first
- Lazy load below-the-fold images: Only load images as they’re about to enter the viewport
- Optimize critical rendering path: Inline critical CSS, defer non-critical resources
- Reduce render-blocking resources: Minimize blocking scripts and stylesheets
- Optimize resource loading: Use HTTP/2, compress files, leverage browser caching
Performance Score
The overall Performance Score (0-100) is a weighted average of multiple metrics, with Core Web Vitals receiving the highest weight. The scoring breakdown in Lighthouse 10+ is:
- Total Blocking Time: 30%
- Largest Contentful Paint: 25%
- Cumulative Layout Shift: 25%
- First Contentful Paint: 10%
- Speed Index: 10%
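Lighthouse first converts each raw metric into an individual 0-100 score using log-normal curves (not shown here); the overall score is then just the weighted sum:

```javascript
// Lighthouse 10+ weights for the overall performance score.
const WEIGHTS = { tbt: 0.30, lcp: 0.25, cls: 0.25, fcp: 0.10, si: 0.10 };

// Combine per-metric scores (each already scaled 0-100) into the overall score.
function overallScore(metricScores) {
  let total = 0;
  for (const [metric, weight] of Object.entries(WEIGHTS)) {
    total += metricScores[metric] * weight;
  }
  return Math.round(total);
}

// Example: perfect TBT/FCP/SI, but LCP scored 60 and CLS scored 80.
console.log(overallScore({ tbt: 100, lcp: 60, cls: 80, fcp: 100, si: 100 })); // → 85
```

The weights explain why a single slow metric with a 25-30% weight (TBT, LCP, CLS) can cap your score even when everything else is perfect.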
Important notes:
- Scores can fluctuate between tests due to variation in network conditions, server load, and test environment
- Focus on individual metrics and specific issues rather than obsessing over the overall score
- A score of 90-100 is considered good, but improving from 95 to 98 doesn’t meaningfully impact user experience
- Different pages will have different scores; test key pages, not just your homepage
Understanding the Waterfall Chart
The waterfall chart (available in GTmetrix and browser developer tools, but not directly in PSI) provides a chronological visualization of every resource request your page makes. It’s one of the most valuable diagnostic tools for understanding performance.
What the Waterfall Shows
- Chronological visualization: Each row represents a resource (HTML, CSS, JS, images, fonts, etc.) loaded in order
- Request timing and duration: Horizontal bars show how long each resource takes from request to completion
- Resource dependencies: See which resources must wait for others to load first
- Request phases: Different colors represent different phases (DNS, connection, waiting, download)
Key Components to Examine
DNS Lookup (typically green or teal):
- Time required to resolve a domain name to an IP address
- Multiple DNS lookups indicate requests to multiple domains (your site, CDN, third-party services)
Optimization: Reduce the number of third-party domains, use dns-prefetch for critical external domains
Connection Time (typically orange):
- Time to establish a TCP connection to the server
- Includes SSL/TLS negotiation for HTTPS connections
- Long connection times for many domains impact performance
Optimization: Limit the number of hosts from which your site loads content, use preconnect for critical third-party origins
Waiting Time (typically purple or yellow):
- Time spent waiting for the server to respond (similar to TTFB for that resource)
- Long waiting times indicate server-side processing issues or slow dynamic resource generation
Optimization: Improve server processing, enable caching, optimize database queries, use a CDN for static assets (Pressable’s edge cache acts as a CDN)
Content Download (typically blue):
- Time to download the actual resource after the server responds
- Large files show long blue bars
- Slow downloads indicate network speed issues or oversized files
Optimization: Compress resources (gzip/brotli), use appropriate image formats (WebP), implement a CDN (Pressable’s edge cache acts as a CDN), reduce file sizes
Analyzing the Waterfall
Request blocking and dependencies:
- Render-blocking resources: Resources early in the waterfall with long bars that prevent page rendering
- Sequential loading: Look for “staircase” patterns where resources load one after another instead of in parallel
- Opportunity for parallelization: Modern browsers can load many resources simultaneously; identify chains that could load in parallel
Third-party resources:
- Identify external domains: Any domain that isn’t your primary site or CDN
- Assess impact: Third-party scripts can significantly delay page load
- Common culprits: Analytics (Google Analytics, Facebook Pixel), advertising networks, social media embeds, chat widgets, A/B testing tools
- Action: Consider removing, deferring, or replacing problematic third-party scripts
Resource size:
- Wide bars indicate large files: Look for unusually long download times
- Check compression: Verify resources are served with brotli or gzip compression (Pressable enables Brotli compression at the server level by default; if a browser that doesn’t support or blocks Brotli is used, then gzip compression is used as the fallback)
- Verify appropriate formats: Images should use modern formats (WebP), JavaScript and CSS should be minified
- Right-size images: Ensure images aren’t significantly larger than needed for display
Request count:
- High request counts impact performance: Even small files create overhead when you have hundreds of requests
- Look for opportunities to combine: In some cases, combining resources can help (though HTTP/2 makes this less critical)
- Consider HTTP/2 benefits: Modern servers, such as those used by Pressable, use HTTP/2 multiplexing to load multiple resources over a single connection
- Eliminate unnecessary requests: Audit and remove unused scripts, styles, and images
Caching:
- Cached resources: Resources loaded from browser cache will show as “(from cache)” or have very short durations
- Verify cache headers: Static assets should have long cache lifetimes (1 year for versioned assets)
- Check cache hit rates: Repeat views should load most static resources from cache
- On Pressable: Page caching is handled automatically; ensure you’re not breaking cache with query strings or dynamic content
Common Waterfall Patterns and What They Mean
Long initial HTML wait time:
- Indicates server processing issues (see Server Response Time section above)
- First resource to load; if this is slow, everything else is delayed
Render-blocking CSS loading:
- Stylesheet loading early in waterfall with long duration
- Blocks page rendering until loaded
Solution: Inline critical CSS, defer non-critical styles
JavaScript blocking progression:
- Scripts loading in <head> without defer or async
- Creates gaps in the waterfall where nothing else loads
Solution: Move scripts to footer, add defer or async attributes
Third-party script cascades:
- One third-party script loads, which then loads more scripts
- Creates long dependency chains
Solution: Defer third-party scripts, load on interaction, or remove if not essential
Images loading serially:
- Images loading one after another instead of in parallel
- May indicate lazy loading being too aggressive or browser limits being hit
Solution: Allow above-the-fold images to load normally, lazy load below-the-fold
Many small requests:
- Hundreds of tiny file requests
- Common with icon fonts loading individual icons or excessive image sprites
Solution: Combine where appropriate, consider SVG sprites, use icon fonts efficiently
Additional Metrics and Diagnostics
Opportunities Section
The Opportunities section provides specific, actionable recommendations with estimated time savings. These are ordered by potential impact.
How to use Opportunities:
- Prioritize high-impact items: Start with opportunities showing the largest time savings
- Consider implementation effort: Balance potential savings against difficulty of implementation
- Focus on wins within your control: Some opportunities (like third-party script performance) may be outside your control
Common opportunities and how to address them on Pressable:
- Properly size images: Serve images at appropriate dimensions; don’t serve 3000px images when 800px would work
- Serve images in next-gen formats: Convert images to WebP (supported by all modern browsers)
- Efficiently encode images: Compress images without sacrificing visual quality
- Eliminate render-blocking resources: Defer CSS/JS or inline critical resources
- Reduce unused JavaScript/CSS: Remove code that isn’t needed on the current page
- Minify CSS/JavaScript: Ensure files are minified (most caching plugins handle this)
- Enable text compression: Verify brotli/gzip is enabled (handled by Pressable’s NGINX configuration)
- Reduce server response times: Optimize code and queries (see Server Response Time section)
Diagnostics Section
The Diagnostics section provides additional information about performance characteristics that may not directly impact scores but offer optimization insights.
Common diagnostics:
- Avoid enormous network payloads: Total page weight is too large (over 1600KB is flagged)
- Serve static assets with an efficient cache policy: Resources should have long cache lifetimes
- Avoid large layout shifts: Identifies elements causing CLS issues
- Image elements do not have explicit width and height: Can cause layout shifts
- Reduce the impact of third-party code: Third-party scripts are slowing down the page
- Minimize main-thread work: Too much JavaScript execution on the main thread
- Reduce JavaScript execution time: Scripts take too long to parse, compile, and execute
- Avoid long main-thread tasks: JavaScript tasks blocking the thread for over 50ms
- Minimize DOM size: Page has excessive DOM (document object model) nodes (over 1500 is flagged as excessive) – this can be common with page builders
- User Timing marks and measures: Shows custom performance marks if you’ve implemented them
Tool-Specific Differences
PageSpeed Insights
What it provides:
- Uses Lighthouse testing engine under the hood
- Provides both lab data (synthetic, simulated testing) and field data (real users via Chrome User Experience Report)
- Tests both mobile and desktop versions
- Focuses heavily on Core Web Vitals
- Provides Core Web Vitals assessment (pass/fail) based on field data
Strengths:
- Free and easy to use
- Official Google tool (aligns with Search ranking factors)
- Real user data when available (28-day aggregate from Chrome users)
- Updated regularly with latest best practices
Limitations:
- No waterfall chart (must use browser DevTools or GTmetrix)
- Limited historical tracking (only shows current results)
- Can’t choose test location
- Field data only available if site has sufficient Chrome user traffic
GTmetrix
What it provides:
- Detailed waterfall chart showing all resource requests
- Multiple test location options (servers in different geographic regions)
- Historical tracking with free account (track performance over time)
- More granular resource analysis
- Video playback of page load process
- Ability to test with different browsers and connection speeds
Strengths:
- Excellent waterfall visualization
- Historical data tracking
- Geographic testing options
- More detailed technical information
Limitations:
- Scoring differs from PSI/Lighthouse
- No real user data (field data)
- Free tier limits number of tests
- Uses Lighthouse for some metrics but adds proprietary scores
Which Tool to Use
Use PageSpeed Insights when:
- You want to align with Google Search ranking factors
- You need real user data (field data) for your site
- You want a quick, simple performance overview
- You’re checking Core Web Vitals assessment
Use GTmetrix when:
- You need detailed waterfall analysis
- You want to test from different geographic locations
- You need historical tracking and comparison
- You want video playback of page load
- You’re doing detailed technical diagnosis
Use both when:
- You want comprehensive analysis from multiple perspectives
- You’re doing major optimization work
- Different tools highlight different issues
Pressable-Specific Considerations
Caching Limitations
Pressable provides optimized caching at the platform level, which impacts how you should approach performance optimization.
Built-in caching:
- Page caching: Automatically caches final HTML output for repeat visitors
- Object caching: Available for caching database query results
- Browser caching: Proper cache headers set for static assets
Critical consideration:
- Object caching and page caching use object-cache.php and advanced-cache.php files
- These files are symlinked into every Pressable site as read-only
- Third-party caching plugins cannot use these files
- Installing additional caching plugins may actually degrade performance rather than improve it
What this means:
- Don’t install caching plugins solely for their cache features (their other optimization features, such as minification and script deferral, still work on Pressable)
- Don’t expect third-party cache plugins to provide caching functionality (they can’t access the required drop-in files)
- Pressable’s caching is already optimized; focus optimization efforts elsewhere such as confirming cache coverage is not degraded by conflicting cookies or code
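A quick way to sanity-check cache coverage is to inspect the response headers your site returns and look for bypass cookies. A minimal sketch — the specific cache header names (`x-cache`, `Age`) are assumptions, since exact headers vary by platform; check what your own responses actually emit:

```shell
#!/bin/sh
# Show cache-related response headers for a URL.
# Header names vary by host; this greps the common ones.
cache_headers() {
  curl -sI "$1" | grep -iE '^(x-cache|cache-control|age|set-cookie):'
}

# Page caches typically skip requests that set cookies. Given a saved
# header dump (one header per line), flag any Set-Cookie header.
has_cookie_bypass() {
  if grep -qi '^set-cookie:' "$1"; then
    echo "cookie present: page cache likely bypassed"
  else
    echo "no bypass cookie seen"
  fi
}
```

Running `cache_headers` twice against a logged-out page is a useful test: a hit indicator or an increasing `Age` header on the second request suggests the page cache is serving the response.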
Server Configuration
Pressable’s server configuration is optimized and fixed, which impacts what optimization strategies are available.
Fixed configurations:
- NGINX configuration cannot be modified
- PHP.ini settings cannot be changed
- PHP worker count and memory limits are set (5 workers with 512MB RAM each, or 10 workers on grandfathered accounts)
- PHP execution time limits are fixed
- See Pressable’s PHP settings documentation for full details
What this means:
- Can’t raise the PHP memory limit if a plugin requests more, though 512 MB per PHP worker is ample for most sites
- Can’t change PHP execution time for long-running processes
- Can’t modify NGINX rewrites or redirects at server level
- Must work within existing resource constraints
Worker burst capacity:
- Standard sites can burst up to 110 PHP workers depending on server pool availability
- Burst capacity shares resources across the server pool
- Sustained high traffic may need plan upgrades rather than configuration changes
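The relationship between worker count and uncached throughput is simple arithmetic: with 5 workers and an average PHP response time of 200 ms, each worker can serve about 5 requests per second, so the site sustains roughly 25 uncached requests per second before requests start to queue. A back-of-envelope sketch (the 200 ms figure is illustrative; substitute your own average from server logs or Query Monitor):

```shell
#!/bin/sh
workers=5     # base plan PHP workers
avg_ms=200    # illustrative average PHP response time in milliseconds

# Max sustained uncached requests per second before queueing:
# workers * (1000 ms / avg response time)
echo $(( workers * 1000 / avg_ms ))   # → 25
```

This is why page cache coverage matters so much on a fixed-worker plan: cached responses never consume a PHP worker, so every cache hit raises effective capacity well beyond this number.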
Optimization Focus Areas for Pressable
Given the platform constraints, focus your optimization efforts on:
Database query optimization:
- Use Query Monitor to identify slow queries
- Optimize or eliminate queries taking over 0.1 seconds
- Implement object caching for expensive queries
- Add proper database indexes where needed
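If you can export query timing data — Query Monitor can export its query list, and MySQL's slow query log marks each entry with a `# Query_time:` line — a quick filter for queries over the 0.1-second threshold might look like this (the `slow.log` filename and log availability are assumptions; Pressable may not expose a slow log directly):

```shell
#!/bin/sh
# Print slow-log entries whose Query_time exceeds 0.1 s.
# Assumes MySQL slow-query-log format: "# Query_time: 0.250000  Lock_time: ..."
# followed by the SQL statement on subsequent lines.
awk '/^# Query_time:/ { slow = ($3 > 0.1) }
     slow { print }' slow.log
```

Each query that survives the filter is a candidate for optimization, an index, or object caching.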
Plugin efficiency and selection:
- Audit plugins for performance impact
- Remove plugins that aren’t essential
- Choose lightweight alternatives where possible
- Disable plugin features you’re not using
Asset optimization:
- Compress and optimize images (WebP format, appropriate dimensions)
- Minify CSS and JavaScript
- Remove unused CSS and JavaScript
- Implement lazy loading for images
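Before converting or recompressing anything, it helps to know which files are worth the effort. A minimal sketch that lists unusually large images in the uploads directory — the path and the 200 KB threshold are illustrative, adjust both for your site:

```shell
#!/bin/sh
# List images over ~200 KB in the WordPress uploads directory;
# these are the best candidates for recompression or WebP conversion.
find wp-content/uploads -type f \
  \( -iname '*.jpg' -o -iname '*.jpeg' -o -iname '*.png' \) \
  -size +200k -exec ls -lh {} \;
```

Run it over SFTP or SSH access to the site and work down the list from largest to smallest.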
Third-party script management:
- Audit and remove unnecessary third-party scripts
- Defer third-party scripts until after page load
- Consider removing or replacing heavy scripts (social media embeds, advertising)
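One quick way to spot render-blocking third-party scripts is to save a copy of the page HTML and grep for `<script src=…>` tags that carry neither `defer` nor `async`. A rough text-based heuristic, not a full HTML parse — the `page.html` filename is an assumption, and a script whose URL happens to contain "async" would be skipped:

```shell
#!/bin/sh
# Flag external script tags with neither defer nor async attributes.
# Save the page first, e.g.: curl -s https://example.com/ > page.html
grep -oE '<script[^>]*src="[^"]*"[^>]*>' page.html | grep -vE 'defer|async'
```

Each tag this prints loads synchronously and blocks rendering; those are the first candidates for a `defer` attribute or removal.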
Theme efficiency:
- Choose well-coded, performance-focused themes
- Optimize or remove theme features you’re not using
- Review template files for inefficient queries or loops
When to Contact Support
Contact Pressable support when:
- Persistent high TTFB despite code optimization efforts (consistently over 1 second across all pages)
- Suspected PHP worker saturation during normal traffic levels
- Platform-level issues affecting performance that are outside your control
- Clarification needed on platform limitations or capabilities
- Migration-related performance concerns
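Before reporting persistent high TTFB, it's worth measuring it yourself; curl's timing write-out variables make this easy. A sketch — run it several times against a few different logged-out pages, since a single sample can be skewed by a cold cache:

```shell
#!/bin/sh
# Print time-to-first-byte in seconds for a URL.
# time_starttransfer includes DNS, TLS, and server processing time.
ttfb() {
  curl -o /dev/null -s -w '%{time_starttransfer}\n' "$1"
}

# Example: ttfb https://www.example.com/
```

Consistently seeing values over 1 second on cached, logged-out pages matches the support threshold described above and is worth raising with Pressable.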
Pressable support cannot assist with:
- Plugin-specific issues (contact plugin developer)
- Theme optimization questions (contact theme developer)
- Requests to change fixed server configurations
Conclusion
Performance testing tools like PageSpeed Insights and GTmetrix are valuable diagnostic instruments, but they’re not the final word on your site’s actual user experience. Use them as part of a broader performance strategy:
- Understand the limitations: Synthetic tests don’t reflect real-world conditions perfectly
- Focus on metrics that matter: Core Web Vitals directly impact user experience
- Combine with real user data: Use Google Analytics and Search Console for actual user metrics
- Make incremental changes: Test one optimization at a time to understand impact
- Prioritize high-impact fixes: Start with issues that offer the biggest improvement for the least effort
- Test representative pages: Don’t just optimize your homepage; test key landing pages and high-traffic content
- Monitor trends over time: Track performance over weeks and months, not just single tests
The goal isn’t a perfect score; the goal is a fast, responsive, stable experience for your real users. A site with an 85 performance score but great real-user metrics is better than a site with a 98 score that real users experience as slow.
Performance optimization is an iterative process. Start with the most impactful changes, measure results, and continue refining. On Pressable specifically, focus on code efficiency, database optimization, and asset management, since server-level configurations are already optimized and fixed.
For additional guidance on WordPress performance optimization, refer to Pressable’s comprehensive knowledge base for platform-specific best practices.