AMD at present (and I fully agree with this stance) is not enabling DXR (DirectX Raytracing) Support until it is something that can be used by their entire GPU Range.
While NVIDIA might be content with restricting RTX to their Mainstream / Mid-Range and above Hardware (which, given the price increase from £250 to £350, means a larger number of gamers have actually been priced out of the Mainstream Market) … AMD instead want to ensure that such a feature is available to everyone before enabling it.
Of course there is absolutely nothing stopping Developers (never has been) from using Radeon Rays to support Ray-Tracing on AMD/GCN Hardware; it has been publicly available since Mid-2016.
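For a sense of what that looks like in practice, here is a rough sketch of firing a single ray query through the open-source RadeonRays 2.x C++ SDK. I'm writing this from memory of the public headers, so treat the exact names and signatures (IntersectionApi, CreateMesh, QueryIntersection, etc.) as approximate to your SDK version rather than copy-paste ready:

```cpp
// Rough sketch of a ray query via the RadeonRays 2.x SDK. API names are
// from memory of the open-source release and may differ slightly between
// versions; illustrative only.
#include "radeon_rays.h"

using namespace RadeonRays;

int main()
{
    // Create the intersection API on the first available device.
    IntersectionApi* api = IntersectionApi::Create(0);

    // A single triangle: 3 vertices, 1 face.
    const float vertices[] = { 0.f, 0.f, 0.f,  1.f, 0.f, 0.f,  0.f, 1.f, 0.f };
    const int   indices[]  = { 0, 1, 2 };

    Shape* mesh = api->CreateMesh(vertices, 3, 3 * sizeof(float),
                                  indices, 0, nullptr, 1);
    api->AttachShape(mesh);
    api->Commit(); // builds the acceleration structure (BVH)

    // One ray aimed at the triangle; o.w carries the max distance.
    ray r;
    r.o = float4(0.25f, 0.25f, -1.0f, 1000.0f); // xyz = origin, w = max t
    r.d = float4(0.0f, 0.0f, 1.0f, 0.0f);       // xyz = direction

    Buffer* ray_buf   = api->CreateBuffer(sizeof(ray), &r);
    Buffer* isect_buf = api->CreateBuffer(sizeof(Intersection), nullptr);

    // Closest-hit query; blocking when no event pointer is requested.
    api->QueryIntersection(ray_buf, 1, isect_buf, nullptr, nullptr);

    api->DeleteBuffer(ray_buf);
    api->DeleteBuffer(isect_buf);
    IntersectionApi::Delete(api);
    return 0;
}
```

The point being: the building blocks for Ray-Tracing on GCN have been sitting there, vendor-provided and open source, for years.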
That very little was done with the technology in Games, even by Independent Developers, until NVIDIA released RTX should honestly tell you everything you need to know as to why AMD is really dragging their feet on adding support.
See, until AMD do enable it... it isn't a "Universal" Feature, and thus most Developers won't support it on that basis. Those that currently do support Ray-Tracing (Based) Features are essentially those within NVIDIA's RTX Partner Program; and honestly they are receiving some form of kickback, such as Free Hardware.
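To put that "Universal" point in concrete terms: DXR support is an optional D3D12 feature that every Engine has to probe for at runtime, and any GPU reporting no support (which, until AMD flips the switch, is every Radeon) forces the Engine to ship a complete raster-only path regardless. A minimal, self-contained check (Windows, link against d3d12.lib):

```cpp
// Minimal D3D12 feature probe for DXR support. This is the actual
// mechanism an engine uses to decide whether Ray-Traced effects can
// even be offered on the user's GPU.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
    {
        std::puts("No D3D12 device available.");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    const bool dxr =
        SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts5, sizeof(opts5)))
        && opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;

    std::puts(dxr ? "DXR supported: Ray-Traced effects can be offered."
                  : "DXR unsupported: raster-only path (the common case).");
    return 0;
}
```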
Now beyond this, and don't get me wrong here... while Hardware-Accelerated Ray Processing will become a Standard GPU Feature, that doesn't mean that Ray-Tracing / Path-Tracing (Radeon ProRender / Rays v2.0 onward supports Real-Time Path-Tracing) is, strictly speaking, where Real-Time Engines are headed.
AMD (well, ATI) themselves were early adopters and heavily contributed to IBPM (Image-Based Photon Mapping) in Real-Time Graphics... this was eventually dropped in favour of Forward+ Rendering, because most Development Studios simply had no interest in such an approach to Global Illumination and Light Mapping, given it was a "New" and more "Difficult" approach that required some changes to Workflow Pipelines.
The emergence of PBR (from Disney/PIXAR) essentially allowed for 'similar' results once again, with minimal disruption to established workflows or engine design.
In many ways it's why NVIDIA introduced RTX the way they have: basically a Black-Box "Plug-n-Play" akin to their other GameWorks Kits. Heck, take a look at how few Studios even create their own In-House Engines today; most of the big Studios use a Single Engine (Frostbite, Dunia, Crystal, CryEngine, Unreal Engine, Snowdrop, Glass, RE Engine) across all of their projects; so provided AMD/NVIDIA can sway the Studio who produces and maintains said Engine, then all other projects will essentially be ideal for their Hardware.
Now, as weird as it might sound, while AMD holds out... NVIDIA really can't get the support they NEED for RTX.
Because in terms of Ray-Tracing, NVIDIA Hardware is better; they have Custom-Designed Cores, taking up approx. 30% of the GPU, DEDICATED to just that task. Whereas with AMD Hardware, sure, if you're not doing anything else (say Blender Rendering, for example) it's better; but when we're talking about Game Engines that have to handle half-a-dozen other tasks … well, that's where it suffers, because GCN/RDNA Cores are sharing said workload. Sure, it can handle faster / more Rays, but that comes at the cost of, say, as many Polygons, or Shaded Surfaces, etc.
These are handled via separate Cores on Turing, so there's no fighting for processing resources.
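To make that contention argument concrete, here's a deliberately crude toy model. Every number in it is an assumption for illustration only (the approx. 30% die share from above, plus an invented sweep of ray workloads); it is not a benchmark of real Hardware:

```cpp
// Toy model of the trade-off: dedicated RT cores vs. shared compute.
// ALL numbers are invented assumptions, purely to illustrate the
// contention argument above.
#include <cstdio>

int main()
{
    // Normalise total die resources to 1.0 "units of throughput".
    const double rt_core_share = 0.30; // assumed share of die spent on RT cores

    // Sweep the fraction of a frame's work that is ray processing.
    for (double ray_load = 0.0; ray_load <= 0.61; ray_load += 0.15)
    {
        // Dedicated design: raster always keeps 0.70 of the die;
        // rays never steal from it (they run on the RT cores).
        const double dedicated_raster = 1.0 - rt_core_share;

        // Shared design: the whole die can rasterise, minus whatever
        // the ray workload eats out of the same Cores.
        const double shared_raster = 1.0 - ray_load;

        std::printf("ray load %3.0f%% | dedicated raster %.2f | shared raster %.2f%s\n",
                    ray_load * 100.0, dedicated_raster, shared_raster,
                    shared_raster < dedicated_raster ? "  <- shared suffers" : "");
    }
    return 0;
}
```

The crossover is the whole argument in miniature: below that assumed ~30% ray workload, the shared design actually has more raster throughput left over; beyond it, the dedicated design pulls ahead.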
AMD Hardware ends up better if an engine is well optimised and tuned, because of being able to Share Resources without having to duplicate workloads; but the fact is MOST Studios are never going to take this approach, as they want to simply "Enhance" their classic approaches (and for them, ideally with a "Raytracing Enabled / Disabled" Flag).
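That "Enhance with a toggle" pattern looks roughly like the sketch below: a classic raster pipeline where individual passes are swapped for traced versions behind a single flag. Everything here (the RendererConfig struct, the stage names) is hypothetical, purely to show the structure Studios favour:

```cpp
// Hypothetical sketch of the "Raytracing Enabled / Disabled" pattern:
// a classic raster pipeline where traced effects merely replace single
// passes. All names are invented for illustration.
#include <cstdio>

struct RendererConfig { bool raytracing_enabled = false; };

// Stub stages standing in for real engine passes.
void raster_gbuffer()        { std::puts("  raster G-buffer"); }
void ssr_reflections()       { std::puts("  screen-space reflections"); }
void traced_reflections()    { std::puts("  ray-traced reflections"); }
void shadow_maps()           { std::puts("  shadow maps"); }
void traced_shadows()        { std::puts("  ray-traced shadows"); }
void composite_and_present() { std::puts("  composite + present"); }

void render_frame(const RendererConfig& cfg)
{
    raster_gbuffer(); // the classic pipeline stays intact either way

    // Each "RTX effect" is just a drop-in replacement for one pass;
    // nothing about the engine's structure changes.
    if (cfg.raytracing_enabled) traced_reflections(); else ssr_reflections();
    if (cfg.raytracing_enabled) traced_shadows();     else shadow_maps();

    composite_and_present();
}

int main()
{
    std::puts("Raytracing OFF:");
    render_frame({ /*raytracing_enabled=*/ false });

    std::puts("Raytracing ON:");
    render_frame({ /*raytracing_enabled=*/ true });
    return 0;
}
```

Note how the traced passes are drop-in replacements that duplicate work the raster path already does; nothing is restructured to Share Resources, which is exactly the implementation style a Hybrid design like Turing rewards.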
Personally speaking, I think AMD should keep Ray-Tracing Support exclusive to their Professional Rendering Pipelines (e.g. Max / Maya / Blender / etc.), while for Real-Time Engines, heavily nudge Developers back towards their IBPM Research; such an approach is more accurate to how Light actually works (as opposed to a Brute-Force Hack, which Ray-Tracing is) … but more than that, it's scalable enough to work on Hardware as Low-End as the Athlon 300-Series up to, obviously, the RX 5700 XT (or the TBA RX 5900 XT), providing very similar if not better results than RTX can currently deliver even on an RTX 2080 Ti / Titan.
And I'm not saying that out of some misguided loyalism, or a belief that AMD has more powerful Hardware... but from the simple fact that with an All-in-One Solution, the Number and Frequency of the CUs in Navi will simply be able to process more than the RT Cores in Turing, with its Hybrid Solution.
All DXR offers, just like RTX, is a Hybrid Solution though... it really can't be used (at least not yet, and I'd wager this has A LOT to do with NVIDIA's influence on its Development as an API) as a Dedicated Solution.
Even if it could / does in the future, NVIDIA's Hardware isn't a Dedicated Solution itself; and with it being the most popular Hardware, neither will future implementations / engines be designed that way... as NVIDIA will not allow themselves to be in the same position AMD was with GCN on DirectX 10/11, given they have the Finances and Influence to forcibly prevent that.
Thus NVIDIA will remain in a position (for the foreseeable future, even if/when the PS5 has Ray-Tracing Support) to quite literally keep filibustering it on PC.