# Providing a consistent IO method for all kinds of tensors (#211)
Currently, the converted output for

```julia
t = randn(ℂ^2 ← ℂ^2)
obj = convert(Dict, t)
```

is: [...]

I suggest adding one more entry [...].
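(For reference, a rough sketch of what that `Dict` contains, inferred from the `convert(::Type{Dict}, t)` source quoted later in this thread; the comments describe the structure rather than verbatim output.)

```julia
using TensorKit

t = randn(ℂ^2 ← ℂ^2)
obj = convert(Dict, t)
# obj has three entries:
#   obj[:codomain], obj[:domain]: `repr` strings of the two spaces
#   obj[:data]: a Dict{String,Any} mapping the `repr` of each block
#               sector to a dense Array (a single 2x2 block here)
size(obj[:data][repr(Trivial())])  # (2, 2)
```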
---

Converting to [...]. It is definitely also true that the [...]. The question is what kind of behaviour you want. Also, an [...]. Is it really important that a `DiagonalTensorMap` comes back as a `DiagonalTensorMap` after loading? I am definitely open to suggestions resulting from actual use cases. I find coming up with the correct interface always the most difficult part of such questions.
---

In the imaginary time evolution algorithms I'm currently working on, there are many operations that absorb the bond weights into, or remove them from, the PEPS tensors. My experience with Python is that if the multiplication with the weights is done naively (with all the zeros stored in dense format), it can be much slower than code optimized for diagonal matrices (see the sketch after the snippet below). In addition, the conversion code in the README may need an update:

```julia
using JLD2
filename = "choose_some_filename.jld2"
t_dict = jldload(filename)
T = eltype(valtype(t_dict[:data]))
t = TensorMap{T}(undef, t_dict[:space])
for ((f₁, f₂), val) in t_dict[:data]
t[f₁, f₂] .= val
end
```
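(A minimal, self-contained illustration of the diagonal-versus-dense cost difference mentioned above; the matrix size is an arbitrary choice.)

```julia
using LinearAlgebra

# multiplying by a Diagonal is a row scaling, O(n^2); the same weights
# stored as a dense matrix force a full O(n^3) matrix product
n = 2000
A = randn(n, n)
w = Diagonal(randn(n))   # diagonal weight matrix
W = Matrix(w)            # identical weights, stored densely
@time w * A              # fast
@time W * A              # much slower
```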
---
Thanks; these are valuable bug reports. Do you feel comfortable starting to prepare a PR to fix these? I am tied up correcting exams until at least Monday afternoon, so it will probably not be before Tuesday that I have some time to fix this myself.
---

I did some further testing, and found the following:

```julia
# define the following in both old and new environments
using TensorKit
using JLD2
save_tensor(filename::AbstractString, t::AbstractTensorMap) = save_object(filename, convert(Dict, t))
load_tensor(filename::AbstractString) = convert(TensorMap, load_object(filename))
name_old = "old.jld2"
name_new = "new.jld2"
# in old environment containing TensorKit v0.12.7
stype = FermionParity ⊠ U1Irrep ⊠ SU2Irrep
V1 = ℂ[stype]((0,2,1) => 2, (1,-1,1//2) => 3, (0,-2,0) => 2)
V2 = ℂ[stype]((0,0,1) => 3, (1,3,1//2) => 2, (0,-2,1) => 4)
a = TensorMap(randn, Float64, V1 ⊗ V1', V2' ⊗ V2);
save_tensor(name_old, a)
# in new environment containing TensorKit v0.14.3
save_tensor(name_new, load_tensor(name_old))
# back to old environment
println(a == load_tensor(name_new)) # output: true
```

This seems to demonstrate that the (unordered) `Dict` produced by the new TensorKit is the same as the old one. Look at the source code (which is unchanged from v0.12.7 to v0.14.3):

```julia
function Base.convert(::Type{Dict}, t::AbstractTensorMap)
d = Dict{Symbol,Any}()
d[:codomain] = repr(codomain(t))
d[:domain] = repr(domain(t))
data = Dict{String,Any}()
# the export explicitly saves which block is which
for (c, b) in blocks(t)
data[repr(c)] = Array(b)
end
d[:data] = data
return d
end
```

Although the iteration order of `blocks(t)` is not guaranteed, the exported data is a `Dict` keyed by `repr(c)` for each block sector `c`, so the result does not depend on that order. On the other hand, the structure of fusion trees has changed from v0.12.7 to newer versions, and the old fusion trees exported by v0.12.7 cannot be rebuilt in newer versions. Consider the simplest example of "trivial" bosonic tensors without any symmetry:

```julia
# v0.14.3
using TensorKit
a = rand(ℂ^2 ← ℂ^3)
fusiontrees(a)
#= output:
1-element Vector{Tuple{FusionTree{Trivial, 1, 0, 0}, FusionTree{Trivial, 1, 0, 0}}}:
(FusionTree{Trivial}((Trivial(),), Trivial(), (false,), ()), FusionTree{Trivial}((Trivial(),), Trivial(), (false,), ()))
=#
# v0.12.7
a = TensorMap(randn, Float64, ℂ^2, ℂ^3)
collect(fusiontrees(a))
#= output:
1-element Vector{Tuple{Nothing, Nothing}}:
(nothing, nothing)
=#
```

The new TensorKit cannot build the fusion tree from the `(nothing, nothing)` pairs exported by v0.12.7.
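(As an aside, for the `Trivial` case a hypothetical migration shim could translate the old placeholder into the explicit tree; `old_to_new` is an invented name, and the constructor call is copied from the v0.14.3 output above for a single uncoupled sector.)

```julia
using TensorKit

# hypothetical helper: map the old `nothing` fusion-tree placeholder
# (v0.12.7, Trivial symmetry, one uncoupled index) to the explicit tree
old_to_new(::Nothing) = FusionTree{Trivial}((Trivial(),), Trivial(), (false,), ())
```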
---

For the IO of `DiagonalTensorMap`, I can add a simple function that converts a dense diagonal `TensorMap` back to a `DiagonalTensorMap`, while all the IO itself still uses the generic `TensorMap` format (see the sketch below).
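(A minimal sketch of what such a conversion might look like; `to_diagonal` is a hypothetical name, and the sketch assumes that `block(d, c)` on a `DiagonalTensorMap` returns a writable `LinearAlgebra.Diagonal` view into its storage, which is an implementation detail rather than documented API.)

```julia
using TensorKit, LinearAlgebra

# sketch: turn a dense TensorMap of the form V ← V whose blocks are
# (numerically) diagonal into a DiagonalTensorMap, keeping only diagonals
function to_diagonal(t::AbstractTensorMap)
    numout(t) == numin(t) == 1 && codomain(t) == domain(t) ||
        throw(ArgumentError("expected a tensor of the form V ← V"))
    d = DiagonalTensorMap{scalartype(t)}(undef, domain(t)[1])
    for (c, b) in blocks(t)
        bd = block(d, c)        # assumed: writable Diagonal view
        for i in axes(b, 1)
            bd[i, i] = b[i, i]  # copy only the diagonal entries
        end
    end
    return d
end
```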
---

Just putting this out there: that test does not actually check whether the tensor is still compatible, in the sense that there might be basis transformations within a block that change the interpretation of the data without actually altering the entries. Basically, tensors are basis dependent, so you cannot tell by looking at the entries whether they are the same. A better way to test this would be, for example, to construct two random tensors in an arbitrary partition, along with their overlap (best with some form of permutation as well). Since the overlap is a scalar, it is basis independent, and should thus remain the same after loading the tensors in the new format; see the sketch below. All this being said, I'm actually struggling to come up with an explicit example where the internal ordering of the fusion trees has actually changed.
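(A sketch of that overlap test, reusing the `save_tensor`/`load_tensor` helpers defined earlier in this thread; the space and filenames are arbitrary choices, and the two halves are meant to run in the old and new environments respectively.)

```julia
using TensorKit, LinearAlgebra

# in the old environment (TensorKit v0.12.7):
V = ℂ[U1Irrep](0 => 2, 1 => 1)
a = TensorMap(randn, Float64, V ⊗ V', V)
b = TensorMap(randn, Float64, V ⊗ V', V)
println(dot(a, b))          # scalar overlap: basis independent
save_tensor("a.jld2", a)
save_tensor("b.jld2", b)

# in the new environment: reload and recompute the same overlap;
# it must reproduce the value printed above up to round-off
a′ = load_tensor("a.jld2")
b′ = load_tensor("b.jld2")
println(dot(a′, b′))
```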
---

The order in the block should be the iteration order, so it should have changed. Let me try to come up with an example.
---

On TensorKit v0.12.7:

```julia
julia> V = Vect[FibonacciAnyon](:I=>1, :τ=>1)
julia> t = TensorMap(randn, V * V * V * V, V * V * V)
julia> sort(collect(pairs(t.rowr[FibonacciAnyon(:I)])); by = first ∘ last)
13-element Vector{Pair{FusionTree{FibonacciAnyon, 4, 2, 3, Nothing}, UnitRange{Int64}}}:
FusionTree{FibonacciAnyon}((:I, :I, :I, :I), :I, (false, false, false, false), (:I, :I)) => 1:1
FusionTree{FibonacciAnyon}((:τ, :τ, :I, :I), :I, (false, false, false, false), (:I, :I)) => 2:2
FusionTree{FibonacciAnyon}((:τ, :I, :τ, :I), :I, (false, false, false, false), (:τ, :I)) => 3:3
FusionTree{FibonacciAnyon}((:I, :τ, :τ, :I), :I, (false, false, false, false), (:τ, :I)) => 4:4
FusionTree{FibonacciAnyon}((:τ, :τ, :τ, :I), :I, (false, false, false, false), (:τ, :I)) => 5:5
FusionTree{FibonacciAnyon}((:τ, :I, :I, :τ), :I, (false, false, false, false), (:τ, :τ)) => 6:6
FusionTree{FibonacciAnyon}((:I, :τ, :I, :τ), :I, (false, false, false, false), (:τ, :τ)) => 7:7
FusionTree{FibonacciAnyon}((:τ, :τ, :I, :τ), :I, (false, false, false, false), (:τ, :τ)) => 8:8
FusionTree{FibonacciAnyon}((:I, :I, :τ, :τ), :I, (false, false, false, false), (:I, :τ)) => 9:9
FusionTree{FibonacciAnyon}((:τ, :I, :τ, :τ), :I, (false, false, false, false), (:τ, :τ)) => 10:10
FusionTree{FibonacciAnyon}((:I, :τ, :τ, :τ), :I, (false, false, false, false), (:τ, :τ)) => 11:11
FusionTree{FibonacciAnyon}((:τ, :τ, :τ, :τ), :I, (false, false, false, false), (:I, :τ)) => 12:12
FusionTree{FibonacciAnyon}((:τ, :τ, :τ, :τ), :I, (false, false, false, false), (:τ, :τ)) => 13:13
```

On TensorKit v0.14:

```julia
julia> V = Vect[FibonacciAnyon](:I=>1, :τ=>1)
julia> collect(enumerate(getindex.(TensorKit.fusionblockstructure(V * V * V * V ← V * V * V).fusiontreelist[1:13], 1)))
13-element Vector{Tuple{Int64, FusionTree{FibonacciAnyon, 4, 2, 3}}}:
(1, FusionTree{FibonacciAnyon}((:I, :I, :I, :I), :I, (false, false, false, false), (:I, :I)))
(2, FusionTree{FibonacciAnyon}((:τ, :τ, :I, :I), :I, (false, false, false, false), (:I, :I)))
(3, FusionTree{FibonacciAnyon}((:τ, :I, :τ, :I), :I, (false, false, false, false), (:τ, :I)))
(4, FusionTree{FibonacciAnyon}((:I, :τ, :τ, :I), :I, (false, false, false, false), (:τ, :I)))
(5, FusionTree{FibonacciAnyon}((:τ, :τ, :τ, :I), :I, (false, false, false, false), (:τ, :I)))
(6, FusionTree{FibonacciAnyon}((:τ, :I, :I, :τ), :I, (false, false, false, false), (:τ, :τ)))
(7, FusionTree{FibonacciAnyon}((:I, :τ, :I, :τ), :I, (false, false, false, false), (:τ, :τ)))
(8, FusionTree{FibonacciAnyon}((:τ, :τ, :I, :τ), :I, (false, false, false, false), (:τ, :τ)))
(9, FusionTree{FibonacciAnyon}((:I, :I, :τ, :τ), :I, (false, false, false, false), (:I, :τ)))
(10, FusionTree{FibonacciAnyon}((:τ, :τ, :τ, :τ), :I, (false, false, false, false), (:I, :τ)))
(11, FusionTree{FibonacciAnyon}((:τ, :I, :τ, :τ), :I, (false, false, false, false), (:τ, :τ)))
(12, FusionTree{FibonacciAnyon}((:I, :τ, :τ, :τ), :I, (false, false, false, false), (:τ, :τ)))
(13, FusionTree{FibonacciAnyon}((:τ, :τ, :τ, :τ), :I, (false, false, false, false), (:τ, :τ)))
```

Rows 10 and 12 have changed.
---

It would be great if TensorKit provided out-of-the-box functions to save all kinds of tensors to disk and load them back (just like PyTorch's `save` and `load`). The issue I'm currently running into is the IO of a `DiagonalTensorMap`. For ordinary `TensorMap`s, I can use the `save_tensor`/`load_tensor` functions quoted in the comments above, which, at least for ordinary `TensorMap`s, actually work for both 0.12.7 and newer versions of TensorKit.

In addition, there is not a convenient conversion from a diagonal `TensorMap` back to a `DiagonalTensorMap`, so I cannot easily load a saved tensor back as a `DiagonalTensorMap`. I guess there is a similar problem for some other special kinds of tensors.