I'm developing a service that serves visitors in most cities in my country. Visitors are redirected to a subdomain corresponding to their city (city.domain.com).
Suppose I want to let visitors use the service from the site's root domain, domain.com, for a smoother experience, but still use subdomains for geolocation and indexing (it's a Russian site, and Yandex favors regional subdomains, since it can assign at most 25 cities to a single domain). The setup would be:
- the content to be indexed would be at city.domain.com/page-name/
- spiders would still see city.domain.com/page-name/, reached through a server-side 301 redirect from domain.com/city/page-name/ (this wouldn't affect humans, whose address bar is managed client-side via the History API; see the sketches below)
- there would be a rel="canonical" on domain.com/city/page-name/ pointing at city.domain.com/page-name/ (and, possibly, a meta noindex on all domain.com pages).
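To make the moving parts concrete, here is a minimal sketch of the server-side redirect, assuming a TypeScript/Express setup (the question doesn't specify a stack; the domain.com hostname and the /:city/:page/ route shape are illustrative):

```typescript
import express from "express";

const app = express();

// Unconditional server-side 301: any actual HTTP request for
// domain.com/<city>/<page-name>/ is sent to the subdomain URL,
// which is what a crawler following a link will receive and index.
// Humans browsing via the History API never issue this request,
// because the address bar changes without a page load (though a
// hard reload of such a URL would redirect them as well).
app.get("/:city/:page/", (req, res) => {
  const { city, page } = req.params;
  res.redirect(301, `https://${city}.domain.com/${page}/`);
});

// Pages actually rendered on domain.com would carry
// <link rel="canonical" href="https://<city>.domain.com/<page>/">
// in their <head>, as described above.

app.listen(3000);
```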
The only difference between the pages that robots and humans end up on is the URL, not the content (aside from the rel="canonical" on the main domain).
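On the client, the History API half could look roughly like the following (the showPage helper and the main selector are hypothetical; fetching and rendering are elided):

```typescript
// Runs on domain.com. After fetching a page's content (elided),
// swap it into the DOM and rewrite the address bar without a reload.
function showPage(city: string, pageName: string, html: string): void {
  document.querySelector("main")!.innerHTML = html;
  // The visible URL becomes domain.com/<city>/<page-name>/, but no
  // request is made to that path, so the 301 above never fires.
  history.pushState({ city, pageName }, "", `/${city}/${pageName}/`);
}

// Keep Back/Forward working: re-render from the stored state.
window.addEventListener("popstate", (event) => {
  const state = event.state as { city: string; pageName: string } | null;
  if (state) {
    // Re-fetch and re-render state.city / state.pageName (elided).
  }
});
```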
2. Would using only rel="canonical" (and maybe noindex), without the server-side 301 redirect from domain.com/city/ to city.domain.com, be sufficient to prevent duplicate-content issues?
P.S. And yeah, I've read quite a bunch of articles and watched a number of videos on the subject. Unfortunately, neither Yandex nor Google promises to honor cross-domain rel="canonical" 100% of the time.
Giving mixed signals like this is not going to deliver consistent results over time. I'd strongly recommend finding one site structure and using it consistently.