
Conversation

@fabianmcg (Contributor) commented Dec 19, 2025

This patch replaces (by aliasing) LLVM::LLVMPointerType with ptr::PtrType, using #llvm.address_space as the memory space.

Users shouldn't see any build issues from this change: LLVMPointerType still exists and can be used to construct LLVM pointers. The main difference is that LLVMPointerType is now an alias for ptr::PtrType.
This also means that !llvm.ptr<x> is now an alias for !ptr.ptr<#llvm.address_space<x>>.

NOTE: The LLVM dialect now depends on the ptr dialect.
NOTE: This change doesn't modify the parsing/printing behavior of !llvm.ptr. Internally, !llvm.ptr aliases to !ptr.ptr.
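As a minimal illustration (not part of the patch; the function names are placeholders), the two spellings below denote the same type after this change, which is why IR spelled with !llvm.ptr continues to round-trip unchanged:

// !llvm.ptr<1> is an alias for !ptr.ptr<#llvm.address_space<1>>,
// so these two declarations use the same underlying pointer type.
llvm.func @with_alias(!llvm.ptr<1>)
llvm.func @with_expanded_form(!ptr.ptr<#llvm.address_space<1>>)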

The churn in the tests is caused by !ptr.ptr using #ptr.spec to specify the data layout instead of the old dense attribute. To assist with updating the tests, I used Claude Opus 4.5 to create the following script:
https://gist.github.com/fabianmcg/06b3d29cb31c2d6705ecb6e18bfb2a77
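For example, a pointer entry in a data layout spec changes from the dense-attribute encoding to #ptr.spec, as in this before/after pair drawn from the test updates below:

// Before: pointer layout encoded as a dense vector of (size, abi, preferred, index), in bits.
#dlti.dl_entry<!llvm.ptr, dense<64> : vector<4xi64>>
// After: the same layout spelled with #ptr.spec.
#dlti.dl_entry<!llvm.ptr, #ptr.spec<size = 64, abi = 64, preferred = 64, index = 64>>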

This patch depends on #173091.

Signed-off-by: Fabian Mora <fabian.mora-cordero@amd.com>
@fabianmcg fabianmcg marked this pull request as ready for review December 19, 2025 20:29
@llvmbot (Member) commented Dec 19, 2025

@llvm/pr-subscribers-mlir-openacc
@llvm/pr-subscribers-mlir
@llvm/pr-subscribers-mlir-openmp

@llvm/pr-subscribers-mlir-llvm

Author: Fabian Mora (fabianmcg)

Changes

This patch replaces (by aliasing) LLVM::LLVMPointerType with ptr::PtrType, using #llvm.address_space as the memory space.

Users shouldn't see any build issues from this change: LLVMPointerType still exists and can be used to construct LLVM pointers. The main difference is that LLVMPointerType is now an alias for ptr::PtrType.
This also means that !llvm.ptr<x> is now an alias for !ptr.ptr<#llvm.address_space<x>>.

NOTE: The LLVM dialect now depends on the ptr dialect.
NOTE: This change doesn't modify the parsing/printing behavior of !llvm.ptr. Internally, !llvm.ptr aliases to !ptr.ptr.

The churn in the tests is caused by !ptr.ptr using #ptr.spec to specify the data layout instead of the old dense attribute. To assist with updating the tests, I used Claude Opus 4.5 to create the following script:
https://gist.github.com/fabianmcg/06b3d29cb31c2d6705ecb6e18bfb2a77


Patch is 260.56 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/173094.diff

69 Files Affected:

  • (modified) flang/test/Fir/CUDA/cuda-alloc-free.fir (+4-4)
  • (modified) flang/test/Fir/CUDA/cuda-allocate.fir (+2-2)
  • (modified) flang/test/Fir/CUDA/cuda-code-gen.mlir (+10-10)
  • (modified) flang/test/Fir/CUDA/cuda-constructor-2.f90 (+3-3)
  • (modified) flang/test/Fir/CUDA/cuda-data-transfer.fir (+5-6)
  • (modified) flang/test/Fir/CUDA/cuda-global-addr.mlir (+5-5)
  • (modified) flang/test/Fir/CUDA/cuda-gpu-launch-func.mlir (+28-28)
  • (modified) flang/test/Fir/CUDA/cuda-launch.fir (+5-5)
  • (modified) flang/test/Fir/CUDA/cuda-register-func.fir (+1-1)
  • (modified) flang/test/Fir/CUDA/cuda-shared-offset.mlir (+9-9)
  • (modified) flang/test/Fir/CUDA/cuda-shared-to-llvm.mlir (+1-1)
  • (modified) flang/test/Fir/CUDA/cuda-sync-desc.mlir (+1-1)
  • (modified) flang/test/Fir/MIF/co_broadcast.mlir (+3-3)
  • (modified) flang/test/Fir/MIF/co_max.mlir (+3-3)
  • (modified) flang/test/Fir/MIF/co_min.mlir (+3-3)
  • (modified) flang/test/Fir/MIF/co_sum.mlir (+2-2)
  • (modified) flang/test/Fir/MIF/init.mlir (+3-3)
  • (modified) flang/test/Fir/MIF/num_images.mlir (+1-1)
  • (modified) flang/test/Fir/MIF/sync_all.mlir (+1-1)
  • (modified) flang/test/Fir/MIF/sync_images.mlir (+9-9)
  • (modified) flang/test/Fir/MIF/sync_memory.mlir (+1-1)
  • (modified) flang/test/Fir/MIF/this_image.mlir (+1-1)
  • (modified) flang/test/Fir/OpenACC/openacc-mappable.fir (+1-1)
  • (modified) flang/test/Fir/convert-and-fold-insert-on-range.fir (+1-1)
  • (modified) flang/test/Transforms/debug-92391.fir (+1-1)
  • (modified) flang/test/Transforms/debug-associate-component.fir (+1-1)
  • (modified) flang/test/Transforms/debug-assumed-shape-array.fir (+1-1)
  • (modified) flang/test/Transforms/debug-derived-type-1.fir (+1-1)
  • (modified) flang/test/Transforms/debug-dummy-argument.fir (+1-1)
  • (modified) flang/test/Transforms/debug-local-global-storage-1.fir (+1-1)
  • (modified) flang/test/Transforms/loop-versioning.fir (+5-5)
  • (modified) flang/test/Transforms/tbaa-cray-pointer.fir (+1-2)
  • (modified) flang/test/Transforms/tbaa-derived-with-descriptor.fir (+1-1)
  • (modified) flang/test/Transforms/tbaa-for-common-vars.fir (+9-9)
  • (modified) flang/test/Transforms/tbaa-for-global-equiv-vars.fir (+2-2)
  • (modified) flang/test/Transforms/tbaa-for-local-vars.fir (+1-1)
  • (modified) flang/test/Transforms/tbaa-local-alloc-threshold.fir (+1-1)
  • (modified) flang/test/Transforms/tbaa-target-inlined-results.fir (+1-1)
  • (modified) flang/test/Transforms/tbaa-with-dummy-scope.fir (+2-2)
  • (modified) flang/test/Transforms/tbaa-with-dummy-scope2.fir (+2-2)
  • (modified) flang/test/Transforms/tbaa.fir (+5-5)
  • (modified) flang/test/Transforms/tbaa2.fir (+1-1)
  • (modified) flang/test/Transforms/tbaa3.fir (+1-1)
  • (modified) flang/test/Transforms/tbaa4.fir (+4-4)
  • (modified) mlir/include/mlir/Dialect/LLVMIR/LLVMDialect.td (+3)
  • (modified) mlir/include/mlir/Dialect/LLVMIR/LLVMTypes.h (+40-13)
  • (modified) mlir/include/mlir/Dialect/LLVMIR/LLVMTypes.td (-35)
  • (modified) mlir/include/mlir/Target/LLVMIR/DataLayoutImporter.h (+2-2)
  • (modified) mlir/lib/Conversion/LLVMCommon/TypeConverter.cpp (+4)
  • (modified) mlir/lib/Conversion/PtrToLLVM/PtrToLLVM.cpp (+9-5)
  • (modified) mlir/lib/Dialect/LLVMIR/CMakeLists.txt (+1)
  • (modified) mlir/lib/Dialect/LLVMIR/IR/LLVMDialect.cpp (+22-1)
  • (modified) mlir/lib/Dialect/LLVMIR/IR/LLVMTypeSyntax.cpp (+19)
  • (modified) mlir/lib/Dialect/LLVMIR/IR/LLVMTypes.cpp (+11-154)
  • (modified) mlir/lib/Dialect/OpenACC/IR/OpenACC.cpp (+2-3)
  • (modified) mlir/lib/Dialect/OpenMP/IR/OpenMPDialect.cpp (+2-3)
  • (modified) mlir/lib/Dialect/Ptr/IR/PtrTypes.cpp (+1-1)
  • (modified) mlir/lib/Target/LLVMIR/DataLayoutImporter.cpp (+3-5)
  • (modified) mlir/test/Conversion/MemRefToLLVM/convert-dynamic-memref-ops.mlir (+2-2)
  • (modified) mlir/test/Dialect/LLVMIR/layout.mlir (+29-48)
  • (modified) mlir/test/Dialect/LLVMIR/mem2reg.mlir (+2-2)
  • (modified) mlir/test/Dialect/OpenMP/omp-offload-privatization-prepare-by-value.mlir (+1-1)
  • (modified) mlir/test/Dialect/OpenMP/omp-offload-privatization-prepare.mlir (+1-1)
  • (modified) mlir/test/Dialect/Ptr/layout.mlir (+2-2)
  • (modified) mlir/test/Target/LLVMIR/Import/data-layout.ll (+6-6)
  • (modified) mlir/test/Target/LLVMIR/data-layout.mlir (+2-2)
  • (modified) mlir/test/Target/LLVMIR/omptarget-atomic-capture-control-options.mlir (+1-1)
  • (modified) mlir/test/Target/LLVMIR/omptarget-atomic-update-control-options.mlir (+1-1)
  • (modified) mlir/test/Target/LLVMIR/omptarget-parallel-llvm-debug.mlir (+1-2)
diff --git a/flang/test/Fir/CUDA/cuda-alloc-free.fir b/flang/test/Fir/CUDA/cuda-alloc-free.fir
index 85313d78cc022..cd58b8baf11c4 100644
--- a/flang/test/Fir/CUDA/cuda-alloc-free.fir
+++ b/flang/test/Fir/CUDA/cuda-alloc-free.fir
@@ -1,6 +1,6 @@
 // RUN: fir-opt --cuf-convert %s | FileCheck %s
 
-module attributes {dlti.dl_spec = #dlti.dl_spec<#dlti.dl_entry<f80, dense<128> : vector<2xi64>>, #dlti.dl_entry<i128, dense<128> : vector<2xi64>>, #dlti.dl_entry<i64, dense<64> : vector<2xi64>>, #dlti.dl_entry<!llvm.ptr<272>, dense<64> : vector<4xi64>>, #dlti.dl_entry<!llvm.ptr<271>, dense<32> : vector<4xi64>>, #dlti.dl_entry<!llvm.ptr<270>, dense<32> : vector<4xi64>>, #dlti.dl_entry<f128, dense<128> : vector<2xi64>>, #dlti.dl_entry<f64, dense<64> : vector<2xi64>>, #dlti.dl_entry<f16, dense<16> : vector<2xi64>>, #dlti.dl_entry<i32, dense<32> : vector<2xi64>>, #dlti.dl_entry<i16, dense<16> : vector<2xi64>>, #dlti.dl_entry<i8, dense<8> : vector<2xi64>>, #dlti.dl_entry<i1, dense<8> : vector<2xi64>>, #dlti.dl_entry<!llvm.ptr, dense<64> : vector<4xi64>>, #dlti.dl_entry<"dlti.endianness", "little">, #dlti.dl_entry<"dlti.stack_alignment", 128 : i64>>} {
+module attributes {dlti.dl_spec = #dlti.dl_spec<#dlti.dl_entry<f80, dense<128> : vector<2xi64>>, #dlti.dl_entry<i128, dense<128> : vector<2xi64>>, #dlti.dl_entry<i64, dense<64> : vector<2xi64>>, #dlti.dl_entry<!llvm.ptr<272>, #ptr.spec<size = 64, abi = 64, preferred = 64, index = 64>>, #dlti.dl_entry<!llvm.ptr<271>, #ptr.spec<size = 32, abi = 32, preferred = 32, index = 32>>, #dlti.dl_entry<!llvm.ptr<270>, #ptr.spec<size = 32, abi = 32, preferred = 32, index = 32>>, #dlti.dl_entry<f128, dense<128> : vector<2xi64>>, #dlti.dl_entry<f64, dense<64> : vector<2xi64>>, #dlti.dl_entry<f16, dense<16> : vector<2xi64>>, #dlti.dl_entry<i32, dense<32> : vector<2xi64>>, #dlti.dl_entry<i16, dense<16> : vector<2xi64>>, #dlti.dl_entry<i8, dense<8> : vector<2xi64>>, #dlti.dl_entry<i1, dense<8> : vector<2xi64>>, #dlti.dl_entry<!llvm.ptr, #ptr.spec<size = 64, abi = 64, preferred = 64, index = 64>>, #dlti.dl_entry<"dlti.endianness", "little">, #dlti.dl_entry<"dlti.stack_alignment", 128 : i64>>} {
 
 func.func @_QPsub1() {
   %0 = cuf.alloc i32 {bindc_name = "idev", data_attr = #cuf.cuda<device>, uniq_name = "_QFsub1Eidev"} -> !fir.ref<i32>
@@ -25,7 +25,7 @@ func.func @_QPsub2() {
 
 // CHECK-LABEL: func.func @_QPsub2()
 // CHECK: %[[BYTES:.*]] = arith.muli %c10{{.*}}, %c4{{.*}} : index
-// CHECK: %[[CONV_BYTES:.*]] = fir.convert %[[BYTES]] : (index) -> i64 
+// CHECK: %[[CONV_BYTES:.*]] = fir.convert %[[BYTES]] : (index) -> i64
 // CHECK: %{{.*}} = fir.call @_FortranACUFMemAlloc(%[[CONV_BYTES]], %c0{{.*}}, %{{.*}}, %{{.*}}) {cuf.data_attr = #cuf.cuda<device>} : (i64, i32, !fir.ref<i8>, i32) -> !fir.llvm_ptr<i8>
 // CHECK: fir.call @_FortranACUFMemFree
 
@@ -53,7 +53,7 @@ func.func @_QPsub3(%arg0: !fir.ref<i32> {fir.bindc_name = "n"}, %arg1: !fir.ref<
 }
 
 // CHECK-LABEL: func.func @_QPsub3
-// CHECK: %[[N:.*]] = arith.select 
+// CHECK: %[[N:.*]] = arith.select
 // CHECK: %[[M:.*]] = arith.select
 // CHECK: %[[NBELEM:.*]] = arith.muli %[[N]], %[[M]] : index
 // CHECK: %[[BYTES:.*]] = arith.muli %[[NBELEM]], %c4{{.*}} : index
@@ -76,7 +76,7 @@ func.func @_QPtest_type() {
 gpu.module @cuda_device_mod {
   gpu.func @_QMalloc() kernel {
     %0 = cuf.alloc !fir.box<!fir.heap<!fir.array<?xf32>>> {bindc_name = "a", data_attr = #cuf.cuda<device>, uniq_name = "_QMallocEa"} -> !fir.ref<!fir.box<!fir.heap<!fir.array<?xf32>>>>
-    gpu.return 
+    gpu.return
   }
 }
 
diff --git a/flang/test/Fir/CUDA/cuda-allocate.fir b/flang/test/Fir/CUDA/cuda-allocate.fir
index 5184561a03e67..07e316d2f92d7 100644
--- a/flang/test/Fir/CUDA/cuda-allocate.fir
+++ b/flang/test/Fir/CUDA/cuda-allocate.fir
@@ -1,6 +1,6 @@
 // RUN: fir-opt --cuf-convert %s | FileCheck %s
 
-module attributes {dlti.dl_spec = #dlti.dl_spec<#dlti.dl_entry<f80, dense<128> : vector<2xi64>>, #dlti.dl_entry<i128, dense<128> : vector<2xi64>>, #dlti.dl_entry<i64, dense<64> : vector<2xi64>>, #dlti.dl_entry<!llvm.ptr<272>, dense<64> : vector<4xi64>>, #dlti.dl_entry<!llvm.ptr<271>, dense<32> : vector<4xi64>>, #dlti.dl_entry<!llvm.ptr<270>, dense<32> : vector<4xi64>>, #dlti.dl_entry<f128, dense<128> : vector<2xi64>>, #dlti.dl_entry<f64, dense<64> : vector<2xi64>>, #dlti.dl_entry<f16, dense<16> : vector<2xi64>>, #dlti.dl_entry<i32, dense<32> : vector<2xi64>>, #dlti.dl_entry<i16, dense<16> : vector<2xi64>>, #dlti.dl_entry<i8, dense<8> : vector<2xi64>>, #dlti.dl_entry<i1, dense<8> : vector<2xi64>>, #dlti.dl_entry<!llvm.ptr, dense<64> : vector<4xi64>>, #dlti.dl_entry<"dlti.endianness", "little">, #dlti.dl_entry<"dlti.stack_alignment", 128 : i64>>} {
+module attributes {dlti.dl_spec = #dlti.dl_spec<#dlti.dl_entry<f80, dense<128> : vector<2xi64>>, #dlti.dl_entry<i128, dense<128> : vector<2xi64>>, #dlti.dl_entry<i64, dense<64> : vector<2xi64>>, #dlti.dl_entry<!llvm.ptr<272>, #ptr.spec<size = 64, abi = 64, preferred = 64, index = 64>>, #dlti.dl_entry<!llvm.ptr<271>, #ptr.spec<size = 32, abi = 32, preferred = 32, index = 32>>, #dlti.dl_entry<!llvm.ptr<270>, #ptr.spec<size = 32, abi = 32, preferred = 32, index = 32>>, #dlti.dl_entry<f128, dense<128> : vector<2xi64>>, #dlti.dl_entry<f64, dense<64> : vector<2xi64>>, #dlti.dl_entry<f16, dense<16> : vector<2xi64>>, #dlti.dl_entry<i32, dense<32> : vector<2xi64>>, #dlti.dl_entry<i16, dense<16> : vector<2xi64>>, #dlti.dl_entry<i8, dense<8> : vector<2xi64>>, #dlti.dl_entry<i1, dense<8> : vector<2xi64>>, #dlti.dl_entry<!llvm.ptr, #ptr.spec<size = 64, abi = 64, preferred = 64, index = 64>>, #dlti.dl_entry<"dlti.endianness", "little">, #dlti.dl_entry<"dlti.stack_alignment", 128 : i64>>} {
 
 func.func @_QPsub1() {
   %0 = cuf.alloc !fir.box<!fir.heap<!fir.array<?xf32>>> {bindc_name = "a", data_attr = #cuf.cuda<device>, uniq_name = "_QFsub1Ea"} -> !fir.ref<!fir.box<!fir.heap<!fir.array<?xf32>>>>
@@ -177,7 +177,7 @@ func.func @_QQallocate_stream() {
   return
 }
 
-// CHECK-LABEL: func.func @_QQallocate_stream() 
+// CHECK-LABEL: func.func @_QQallocate_stream()
 // CHECK: %[[STREAM_ALLOCA:.*]] = fir.alloca i64 {bindc_name = "stream1", uniq_name = "_QFEstream1"}
 // CHECK: %[[STREAM:.*]] = fir.declare %[[STREAM_ALLOCA]] {uniq_name = "_QFEstream1"} : (!fir.ref<i64>) -> !fir.ref<i64>
 // CHECK: fir.call @_FortranACUFAllocatableAllocate(%{{.*}}, %[[STREAM]], %{{.*}}, %{{.*}}, %{{.*}}, %{{.*}}, %{{.*}}) : (!fir.ref<!fir.box<none>>, !fir.ref<i64>, !fir.ref<i1>, i1, !fir.box<none>, !fir.ref<i8>, i32) -> i32
diff --git a/flang/test/Fir/CUDA/cuda-code-gen.mlir b/flang/test/Fir/CUDA/cuda-code-gen.mlir
index e83648f21bdf1..9ceff24d5f5ca 100644
--- a/flang/test/Fir/CUDA/cuda-code-gen.mlir
+++ b/flang/test/Fir/CUDA/cuda-code-gen.mlir
@@ -1,6 +1,6 @@
 // RUN: fir-opt --split-input-file --fir-to-llvm-ir="target=x86_64-unknown-linux-gnu" %s | FileCheck %s
 
-module attributes {dlti.dl_spec = #dlti.dl_spec<#dlti.dl_entry<f80, dense<128> : vector<2xi64>>, #dlti.dl_entry<i128, dense<128> : vector<2xi64>>, #dlti.dl_entry<i64, dense<64> : vector<2xi64>>, #dlti.dl_entry<!llvm.ptr<272>, dense<64> : vector<4xi64>>, #dlti.dl_entry<!llvm.ptr<271>, dense<32> : vector<4xi64>>, #dlti.dl_entry<!llvm.ptr<270>, dense<32> : vector<4xi64>>, #dlti.dl_entry<f128, dense<128> : vector<2xi64>>, #dlti.dl_entry<f64, dense<64> : vector<2xi64>>, #dlti.dl_entry<f16, dense<16> : vector<2xi64>>, #dlti.dl_entry<i32, dense<32> : vector<2xi64>>, #dlti.dl_entry<i16, dense<16> : vector<2xi64>>, #dlti.dl_entry<i8, dense<8> : vector<2xi64>>, #dlti.dl_entry<i1, dense<8> : vector<2xi64>>, #dlti.dl_entry<!llvm.ptr, dense<64> : vector<4xi64>>, #dlti.dl_entry<"dlti.endianness", "little">, #dlti.dl_entry<"dlti.stack_alignment", 128 : i64>>} {
+module attributes {dlti.dl_spec = #dlti.dl_spec<#dlti.dl_entry<f80, dense<128> : vector<2xi64>>, #dlti.dl_entry<i128, dense<128> : vector<2xi64>>, #dlti.dl_entry<i64, dense<64> : vector<2xi64>>, #dlti.dl_entry<!llvm.ptr<272>, #ptr.spec<size = 64, abi = 64, preferred = 64, index = 64>>, #dlti.dl_entry<!llvm.ptr<271>, #ptr.spec<size = 32, abi = 32, preferred = 32, index = 32>>, #dlti.dl_entry<!llvm.ptr<270>, #ptr.spec<size = 32, abi = 32, preferred = 32, index = 32>>, #dlti.dl_entry<f128, dense<128> : vector<2xi64>>, #dlti.dl_entry<f64, dense<64> : vector<2xi64>>, #dlti.dl_entry<f16, dense<16> : vector<2xi64>>, #dlti.dl_entry<i32, dense<32> : vector<2xi64>>, #dlti.dl_entry<i16, dense<16> : vector<2xi64>>, #dlti.dl_entry<i8, dense<8> : vector<2xi64>>, #dlti.dl_entry<i1, dense<8> : vector<2xi64>>, #dlti.dl_entry<!llvm.ptr, #ptr.spec<size = 64, abi = 64, preferred = 64, index = 64>>, #dlti.dl_entry<"dlti.endianness", "little">, #dlti.dl_entry<"dlti.stack_alignment", 128 : i64>>} {
   func.func @_QQmain() attributes {fir.bindc_name = "cufkernel_global"} {
     %c0 = arith.constant 0 : index
     %0 = fir.address_of(@_QQclX3C737464696E3E00) : !fir.ref<!fir.char<1,8>>
@@ -18,7 +18,7 @@ module attributes {dlti.dl_spec = #dlti.dl_spec<#dlti.dl_entry<f80, dense<128> :
   }
 
   // CHECK-LABEL: llvm.func @_QQmain()
-  // CHECK-COUNT-2: llvm.call @_FortranACUFAllocDescriptor 
+  // CHECK-COUNT-2: llvm.call @_FortranACUFAllocDescriptor
 
   fir.global linkonce @_QQclX3C737464696E3E00 constant : !fir.char<1,8> {
     %0 = fir.string_lit "<stdin>\00"(8) : !fir.char<1,8>
@@ -29,7 +29,7 @@ module attributes {dlti.dl_spec = #dlti.dl_spec<#dlti.dl_entry<f80, dense<128> :
 
 // -----
 
-module attributes {dlti.dl_spec = #dlti.dl_spec<f80 = dense<128> : vector<2xi64>, i128 = dense<128> : vector<2xi64>, i64 = dense<64> : vector<2xi64>, !llvm.ptr<272> = dense<64> : vector<4xi64>, !llvm.ptr<271> = dense<32> : vector<4xi64>, !llvm.ptr<270> = dense<32> : vector<4xi64>, f128 = dense<128> : vector<2xi64>, f64 = dense<64> : vector<2xi64>, f16 = dense<16> : vector<2xi64>, i32 = dense<32> : vector<2xi64>, i16 = dense<16> : vector<2xi64>, i8 = dense<8> : vector<2xi64>, i1 = dense<8> : vector<2xi64>, !llvm.ptr = dense<64> : vector<4xi64>, "dlti.endianness" = "little", "dlti.stack_alignment" = 128 : i64>} {
+module attributes {dlti.dl_spec = #dlti.dl_spec<f80 = dense<128> : vector<2xi64>, i128 = dense<128> : vector<2xi64>, i64 = dense<64> : vector<2xi64>, !llvm.ptr<272> = #ptr.spec<size = 64, abi = 64, preferred = 64, index = 64>, !llvm.ptr<271> = #ptr.spec<size = 32, abi = 32, preferred = 32, index = 32>, !llvm.ptr<270> = #ptr.spec<size = 32, abi = 32, preferred = 32, index = 32>, f128 = dense<128> : vector<2xi64>, f64 = dense<64> : vector<2xi64>, f16 = dense<16> : vector<2xi64>, i32 = dense<32> : vector<2xi64>, i16 = dense<16> : vector<2xi64>, i8 = dense<8> : vector<2xi64>, i1 = dense<8> : vector<2xi64>, !llvm.ptr = #ptr.spec<size = 64, abi = 64, preferred = 64, index = 64>, "dlti.endianness" = "little", "dlti.stack_alignment" = 128 : i64>} {
   func.func @_QQmain() attributes {fir.bindc_name = "test"} {
     %c10 = arith.constant 10 : index
     %c20 = arith.constant 20 : index
@@ -59,7 +59,7 @@ module attributes {dlti.dl_spec = #dlti.dl_spec<f80 = dense<128> : vector<2xi64>
 
 // -----
 
-module attributes {dlti.dl_spec = #dlti.dl_spec<f80 = dense<128> : vector<2xi64>, i128 = dense<128> : vector<2xi64>, i64 = dense<64> : vector<2xi64>, !llvm.ptr<272> = dense<64> : vector<4xi64>, !llvm.ptr<271> = dense<32> : vector<4xi64>, !llvm.ptr<270> = dense<32> : vector<4xi64>, f128 = dense<128> : vector<2xi64>, f64 = dense<64> : vector<2xi64>, f16 = dense<16> : vector<2xi64>, i32 = dense<32> : vector<2xi64>, i16 = dense<16> : vector<2xi64>, i8 = dense<8> : vector<2xi64>, i1 = dense<8> : vector<2xi64>, !llvm.ptr = dense<64> : vector<4xi64>, "dlti.endianness" = "little", "dlti.stack_alignment" = 128 : i64>} {
+module attributes {dlti.dl_spec = #dlti.dl_spec<f80 = dense<128> : vector<2xi64>, i128 = dense<128> : vector<2xi64>, i64 = dense<64> : vector<2xi64>, !llvm.ptr<272> = #ptr.spec<size = 64, abi = 64, preferred = 64, index = 64>, !llvm.ptr<271> = #ptr.spec<size = 32, abi = 32, preferred = 32, index = 32>, !llvm.ptr<270> = #ptr.spec<size = 32, abi = 32, preferred = 32, index = 32>, f128 = dense<128> : vector<2xi64>, f64 = dense<64> : vector<2xi64>, f16 = dense<16> : vector<2xi64>, i32 = dense<32> : vector<2xi64>, i16 = dense<16> : vector<2xi64>, i8 = dense<8> : vector<2xi64>, i1 = dense<8> : vector<2xi64>, !llvm.ptr = #ptr.spec<size = 64, abi = 64, preferred = 64, index = 64>, "dlti.endianness" = "little", "dlti.stack_alignment" = 128 : i64>} {
   func.func @_QQmain() attributes {fir.bindc_name = "p1"} {
     %c1_i32 = arith.constant 1 : i32
     %c0_i32 = arith.constant 0 : i32
@@ -129,7 +129,7 @@ module attributes {dlti.dl_spec = #dlti.dl_spec<f80 = dense<128> : vector<2xi64>
 
 // -----
 
-module attributes {dlti.dl_spec = #dlti.dl_spec<!llvm.ptr<270> = dense<32> : vector<4xi64>, f128 = dense<128> : vector<2xi64>, f64 = dense<64> : vector<2xi64>, f16 = dense<16> : vector<2xi64>, i32 = dense<32> : vector<2xi64>, i64 = dense<64> : vector<2xi64>, !llvm.ptr<272> = dense<64> : vector<4xi64>, !llvm.ptr<271> = dense<32> : vector<4xi64>, f80 = dense<128> : vector<2xi64>, i128 = dense<128> : vector<2xi64>, i16 = dense<16> : vector<2xi64>, i8 = dense<8> : vector<2xi64>, !llvm.ptr = dense<64> : vector<4xi64>, i1 = dense<8> : vector<2xi64>, "dlti.endianness" = "little", "dlti.stack_alignment" = 128 : i64>, fir.defaultkind = "a1c4d8i4l4r4", fir.kindmap = "", gpu.container_module, llvm.data_layout = "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128", llvm.ident = "flang version 20.0.0 (git@github.com:clementval/llvm-project.git efc2415bcce8e8a9e73e77aa122c8aba1c1fbbd2)", llvm.target_triple = "x86_64-unknown-linux-gnu"} {
+module attributes {dlti.dl_spec = #dlti.dl_spec<!llvm.ptr<270> = #ptr.spec<size = 32, abi = 32, preferred = 32, index = 32>, f128 = dense<128> : vector<2xi64>, f64 = dense<64> : vector<2xi64>, f16 = dense<16> : vector<2xi64>, i32 = dense<32> : vector<2xi64>, i64 = dense<64> : vector<2xi64>, !llvm.ptr<272> = #ptr.spec<size = 64, abi = 64, preferred = 64, index = 64>, !llvm.ptr<271> = #ptr.spec<size = 32, abi = 32, preferred = 32, index = 32>, f80 = dense<128> : vector<2xi64>, i128 = dense<128> : vector<2xi64>, i16 = dense<16> : vector<2xi64>, i8 = dense<8> : vector<2xi64>, !llvm.ptr = #ptr.spec<size = 64, abi = 64, preferred = 64, index = 64>, i1 = dense<8> : vector<2xi64>, "dlti.endianness" = "little", "dlti.stack_alignment" = 128 : i64>, fir.defaultkind = "a1c4d8i4l4r4", fir.kindmap = "", gpu.container_module, llvm.data_layout = "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128", llvm.ident = "flang version 20.0.0 (git@github.com:clementval/llvm-project.git efc2415bcce8e8a9e73e77aa122c8aba1c1fbbd2)", llvm.target_triple = "x86_64-unknown-linux-gnu"} {
   func.func @_QQmain() {
     %c1_i32 = arith.constant 1 : i32
     %c2 = arith.constant 2 : index
@@ -173,7 +173,7 @@ module attributes {dlti.dl_spec = #dlti.dl_spec<!llvm.ptr<270> = dense<32> : vec
 
 // -----
 
-module attributes {dlti.dl_spec = #dlti.dl_spec<!llvm.ptr<270> = dense<32> : vector<4xi64>, f128 = dense<128> : vector<2xi64>, f64 = dense<64> : vector<2xi64>, f16 = dense<16> : vector<2xi64>, i32 = dense<32> : vector<2xi64>, i64 = dense<64> : vector<2xi64>, !llvm.ptr<272> = dense<64> : vector<4xi64>, !llvm.ptr<271> = dense<32> : vector<4xi64>, f80 = dense<128> : vector<2xi64>, i128 = dense<128> : vector<2xi64>, i16 = dense<16> : vector<2xi64>, i8 = dense<8> : vector<2xi64>, !llvm.ptr = dense<64> : vector<4xi64>, i1 = dense<8> : vector<2xi64>, "dlti.endianness" = "little", "dlti.stack_alignment" = 128 : i64>, fir.defaultkind = "a1c4d8i4l4r4", fir.kindmap = "", gpu.container_module, llvm.data_layout = "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128", llvm.ident = "flang version 20.0.0 (git@github.com:clementval/llvm-project.git efc2415bcce8e8a9e73e77aa122c8aba1c1fbbd2)", llvm.target_triple = "x86_64-unknown-linux-gnu"} {
+module attributes {dlti.dl_spec = #dlti.dl_spec<!llvm.ptr<270> = #ptr.spec<size = 32, abi = 32, preferred = 32, index = 32>, f128 = dense<128> : vector<2xi64>, f64 = dense<64> : vector<2xi64>, f16 = dense<16> : vector<2xi64>, i32 = dense<32> : vector<2xi64>, i64 = dense<64> : vector<2xi64>, !llvm.ptr<272> = #ptr.spec<size = 64, abi = 64, preferred = 64, index = 64>, !llvm.ptr<271> = #ptr.spec<size = 32, abi = 32, preferred = 32, index = 32>, f80 = dense<128> : vector<2xi64>, i128 = dense<128> : vector<2xi64>, i16 = dense<16> : vector<2xi64>, i8 = dense<8> : vector<2xi64>, !llvm.ptr = #ptr.spec<size = 64, abi = 64, preferred = 64, index = 64>, i1 = dense<8> : vector<2xi64>, "dlti.endianness" = "little", "dlti.stack_alignment" = 128 : i64>, fir.defaultkind = "a1c4d8i4l4r4", fir.kindmap = "", gpu.container_module, llvm.data_layout = "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128", llvm.ident = "flang version 20.0.0 (git@github.com:clementval/llvm-project.git efc2415bcce8e8a9e73e77aa122c8aba1c1fbbd2)", llvm.target_triple = "x86_64-unknown-linux-gnu"} {
   func.func @_QPouter(%arg0: !fir.ref<!fir.array<100x100xf64>> {cuf.data_attr = #cuf.cuda<device>, fir.bindc_name = "a"}) {
     %c0_i32 = arith.constant 0 : i32
     %c100 = arith.constant 100 : index
@@ -207,7 +207,7 @@ fir.global common @_QPshared_static__shared_mem__(dense<0> : vector<28xi8>) {ali
 
 // -----
 
-module attributes {dlti.dl_spec = #dlti.dl_spec<#dlti.dl_entry<f80, dense<128> : vector<2xi64>>, #dlti.dl_entry<i128, dense<128> : vector<2xi64>>, #dlti.dl_entry<i64, dense<64> : vector<2xi64>>, #dlti.dl_entry<!llvm.ptr<272>, dense<64> : vector<4xi64>>, #dlti.dl_entry<!llvm.ptr<271>, dense<32> : vector<4xi64>>, #dlti.dl_entry<!llvm.ptr<270>, dense<32> : vector<4xi64>>, #dlti.dl_entry<f128, dense<128> : vector<2xi64>>, #dlti.dl_entry<f64, dense<64> : vector<2xi64>>, #dlti.dl_entry<f16, dense<16> : vector<2xi64>>, #dlti.dl_entry<i32, dense<32> : vector<2xi64>>, #dlti.dl_entry<i16, dense<16> : vector<2xi64>>, #dlti.dl_entry<i8, dense<8> : vector<2xi64>>, #dlti.dl_entry<i1, dense<8> : vector<2xi64>>, #dlti.dl_entry<!llvm.ptr, dense<64> : vector<4xi64>>, #dlti.dl_entry<"dlti.endianness", "little">, #dlti.dl_entry<"dlti.stack_alignment", 128 : i64>>} {
+module attributes {dlti.dl_spec = #dlti.dl_spec<#dlti.dl_entry<f80, dense<128> : vector<2xi64>>, #dlti.dl_entry<i128, dense<128> : vector<2xi64>>, #dlti.dl_entry<i64, dense<64> : vector<2xi64>>, #dlti.dl_entry<!llvm.ptr<272>, #ptr.spec<size = 64, abi = 64, preferred = 64, index = 64>>, #dlti.dl_entry<!llvm.ptr<271>, #ptr.spec<size = 32, abi = 32, preferred = 32, index = 32>>, #dlti.dl_entry<!llvm.ptr<270>, #ptr.spec<size = 32, abi = 32, preferred = 32, index = 32>>, #dlti.dl_entry<f128, dense<128> : vector<2xi64>>, #dlti.dl_entry<f64, dense<64> : vector<2xi64>>, #dlti.dl_entry<f16, dense<16> : vector<2xi64>>, #dlti.dl_entry<i32, dense<32> : vector<2xi64>>, #dlti.dl_entry<i16, dense<16> : vector<2xi64>>, #dlti.dl_entry<i8, dense<8> : vector<2xi64>>, #dlti.dl_entry<i1, dense<8> : vector<2xi64>>, #dlti.dl_entry<!llvm.ptr, #ptr.spec<size = 64, abi = 64, preferred = 64, index = 64>>, #dlti.dl_entry<"dlti.endianness", "little">, #dlti.dl_entry<"dlti.stack_alignment", 128 : i64>>} {
   func.func @_QQmain() attributes {fir.bindc_name = "cufkernel_global"} {
     %c0 = arith.constant 0 : index
     %3 = fir.call @__tgt_acc_get_deviceptr() : () -> !fir.ref<!fir.box<none>>
@@ -224,7 +224,7 @@ module attributes {dlti.dl_spec = #dlti.dl_spec<#dlti.dl_entry<f80, dense<128> :
 
 // -----
 
-module attributes {gpu.container_module, dlti.dl_spec = #dlti.dl_spec<#dlti.dl_entry<f80, dense<128> : vector<2xi64>>, #dlti.dl_entry<i128, dense<128> : vector<2xi64>>, #dlti.dl_entry<i64, dense<64> : vector<2xi64>>, #dlti.dl_entry<!llvm.ptr<272>, dense<64> : vector<4xi64>>, #dlti.dl_entry<!llvm.ptr<271>, dense<32> : vector<4xi64>>, #dlti.dl_entry<!llvm.ptr<270>, dense<32> : vector<4xi64>>, #dlti.dl_entry<f128, dense<128> : vector<2xi64>>, #dlti.dl_entry<f64, dense<64> : vector<2xi64>>, #dlti.dl_entry<f16, dense<16> : vector<2xi64>>, #dlti.dl_entry<i32, dense<32> : vector<2xi64>>, #dlti.dl_entry<i16, dense<16> : vector<2xi64>>, #dlti.dl_entry<i8, dense<8> : vector<2xi64>>, #dlti.dl_entry<i1, dense<8> : vector<2xi64>>, #dlti.dl_entry<!llvm.ptr, dense<64> : vector<4xi64>>, #dlti.dl_entry<"dlti.endianness", "little">, #dlti.dl_entry<"dlti.stack_alignment", 128 : i64>>} {
+module attributes {gpu.container_module, dlti.dl_spec = #dlti.dl_spec<#dlti.dl_entry<f80, dense<128> : vector<2xi64>...
[truncated]

@llvmbot (Member) commented Dec 19, 2025

@llvm/pr-subscribers-openacc
