For no known reason, Google has started indexing HTTPS versions of pages on my site that don't exist. When I click one of those HTTPS results in the search engine, Firefox shows "This Connection is Untrusted" and says my site has an invalid certificate. Any ideas what could cause this?
According to my webhost, the HTML files are likely on Google's servers and not in my hosting account. I also noticed that the indexed URLs point to a folder in my home directory that doesn't exist either; that non-existent folder supposedly contains the non-existent files, yet Google serves results for them. How HTTPS fits into all this I don't know, but every page built from the non-existent files is an HTTPS page (one that doesn't exist). I added a Disallow rule to my robots.txt and it stopped Google from indexing the non-existent files. Actually, I did that months ago; recently I forgot why I had put the Disallow in and removed it, assuming the problem had been fixed by something I or my webhost did. Taking the Disallow out brought the indexing of the non-existent files right back. I really doubt those files on Google's servers will ever go away.
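For reference, the Disallow rule I used looked roughly like the sketch below; "phantom-folder" is a placeholder for the non-existent directory the indexed URLs pointed at, since I'd rather not post the real path.

```text
# robots.txt sketch -- "phantom-folder" is a hypothetical stand-in
# for the non-existent directory showing up in the indexed URLs.
User-agent: *
Disallow: /phantom-folder/
```

With that in place, Google stopped indexing new URLs under that path, though already-indexed entries lingered.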
One place unprogrammed HTTPS pages can come from is an accidental, malicious, or auto-generated SSL folder in your home directory (and maybe elsewhere). I found one and removed it. Let's see what happens.
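If anyone else wants to check for stray SSL folders, something like the sketch below worked for me over SSH. It's a rough, hedged example: `$HOME` is assumed to be your hosting account's root, and the name patterns are just guesses at what an accidental or auto-generated SSL folder might be called.

```shell
#!/bin/sh
# Hedged sketch: list directories under the hosting account's home directory
# whose names suggest SSL/certificate material. "$HOME" is an assumption --
# substitute your actual account root if your host uses something else.
find "${HOME}" -maxdepth 2 -type d \
  \( -iname '*ssl*' -o -iname '*cert*' \) 2>/dev/null
```

Anything it turns up that you didn't create yourself is worth asking your webhost about before deleting.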