Why does Google need access to CSS and JS files?
Google gives better rankings to websites that are user-friendly, load fast, and offer a good user experience. To evaluate a website's user experience, Google needs access to the site's CSS and JavaScript files.
By default, WordPress does not block search bots from accessing any CSS or JS files. However, some webmasters accidentally block them when adding a firewall or configuring a WordPress security plugin.
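For example, an overly broad robots.txt, whether hand-edited or generated by a security plugin, might contain rules like the following (a hypothetical illustration; the exact paths depend on your setup). These block Googlebot from the directories that hold your theme and plugin CSS/JS files:

```
User-agent: *
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /wp-content/themes/
```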
This limits Googlebot's ability to fetch CSS and JS files, which may hurt the site's SEO performance.
How do I find these files and unblock them?
How to grant Google access to your CSS and JS files
First, find out which files on your website Google can't access.
You can do this with the "Fetch as Google" tool, found under the "Crawl" section of Google Search Console (formerly known as Webmaster Tools), which shows how Googlebot views your site. Select desktop or mobile, then click the fetch button to run the test for each device type.
Once the fetch completes, the results appear in the rows below, where you can see pages that were not fully rendered for various reasons.
Clicking on each result displays links to the actual resources that Googlebot could not access.
Most of the time, these are CSS stylesheets and JS files added by WordPress plugins or your theme.
Next, you need to edit the site's robots.txt file, which controls what search bots can see. You can edit it by connecting to the website with an FTP client or through your hosting control panel. The robots.txt file is located in the root directory of the website.
If you use the All in One SEO plugin, you can edit the robots.txt file from within the WordPress admin area. Simply go to the All in One SEO » Tools page and click the "Robots.txt Editor" tab.
Then, enable the custom robots.txt toggle. This allows you to edit the robots.txt file.
After that, you will see a preview of the existing robots.txt file at the bottom of the screen.
You can now add your own custom rules to the robots.txt file.
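As a sketch, assuming the blocked resources live in the standard WordPress directories, rules like these explicitly allow crawlers to fetch CSS and JS files (Googlebot honors `Allow` directives and `*` wildcards) while leaving your other restrictions in place:

```
User-agent: *
Allow: /wp-includes/js/
Allow: /wp-content/plugins/*.css
Allow: /wp-content/plugins/*.js
Allow: /wp-content/themes/*.css
Allow: /wp-content/themes/*.js
```

Adjust the paths to match the specific resources flagged in your fetch results.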
When you are done, save the robots.txt file. Then return to the "Fetch as Google" tool and run the fetch again. Compare the new crawl results with the earlier ones: the blocked-resource problems should now be gone.