Defining Julia element-wise exponentiation of matrices for integer exponents

27 Feb 2014

This follows Michael Croucher’s post “MATLAB: Repeated multiplication is faster than integer powers”.

He noticed that in MATLAB, element-wise exponentiation of matrices (using the .^ operator) was slower than explicitly multiplying the matrix by itself element-wise. I’m pretty sure this is only true for small exponents, anyway. He also wondered how it was in other languages.

Well, I did my job and tried it in Julia:

function testpow()
    # function to compare integer powers (.^) with repeated multiplication

    a=rand(1,10000000)*10;

    println("speed of ^4 using pow")
    tic(); test4p = a.^4; toc()
    println("speed of ^4 using multiply")
    tic(); test4m = a.*a.*a.*a; toc()

    println("maximum difference in results");
    max_diff = maximum(abs(test4p-test4m))
end

Calling testpow() on my Intel(R) Core(TM) i3 M380 @ 2.53 GHz laptop outputs (after warm-up):

speed of ^4 using pow
elapsed time: 1.074429726 seconds
speed of ^4 using multiply
elapsed time: 0.40391358 seconds
maximum difference in results
1.8189894035458565e-12

So the same pattern holds as in MATLAB: .^ is slower than repeated multiplication, just with a different slowdown factor and different absolute times.

But then I wondered: what if I just (re)define the .^ operator for integer exponents to perform chained multiplications? So I wrote this “Cristóvão’s style” Julia code:

import Base.(.^)
 
function .^{T<:FloatingPoint, N}(A::Array{T, N}, x::Integer)
    if abs(x) > 42 # the "Answer to the Ultimate Question of Life, the Universe, and Everything"
        A.^float(x)   # fall back to the original floating-point .^ method for large exponents
    elseif x > 1
        B = similar(A)
        @inbounds for i in 1:length(A)
            B[i] = A[i]
            for k in 1:x-1 B[i] *= A[i] end
        end
        B
    elseif x < 0
        B = similar(A)
        @inbounds for i in 1:length(A)
            B[i] = one(T)
            for k in 1:abs(x) B[i] *= A[i] end
            B[i] \= one(T)   # updating left-division: B[i] = B[i] \ one(T), i.e. the reciprocal
        end
        B
    elseif x == 1
        copy(A)
    else   #  x == 0
        ones(A)
    end
end
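
The testpow() benchmark only exercises the x = 4 branch, so here is a quick sanity-check sketch for the other branches (the array size and the list of exponents are arbitrary choices for illustration), comparing the redefined .^ against the original floating-point path:

A = rand(1, 1000) * 10
for n in (0, 1, 3, -2, 50)
    # the redefined .^ handles these integer exponents,
    # while A.^float(n) dispatches to the original floating-point method
    err = maximum(abs(A.^n - A.^float(n)))
    println("exponent $n: maximum absolute difference = $err")
end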

By running this before the testpow() function definition (and without touching testpow() itself), I get the following output:

speed of ^4 using pow
elapsed time: 0.100119984 seconds <— notice this
speed of ^4 using multiply
elapsed time: 0.40944625 seconds
maximum difference in results
0.0

Cool! A 10x speed-up compared to the default .^.

I think it is really nice to be able to write different algorithms in the very same language and then get changes in other code without having to touch it.
Although I didn’t try it, I guess one cannot achieve this in MATLAB or Python 😛

UPDATE: With real-world real numbers (floating-point numbers), precision can be lost in the chained-multiplication exponentiation implementation.
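
To make that concrete, here is a tiny sketch (the value of x is arbitrary) of how chained multiplication and the floating-point pow can round differently:

x = 1.234567
chained = x*x*x*x*x*x*x           # six multiplications, rounding after each one
via_pow = x^7.0                   # a single call to the floating-point pow
println(abs(chained - via_pow))   # may print a small nonzero difference (a few ulps)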



2 Responses to “Defining Julia element-wise exponentiation of matrices for integer exponents”

  1. Sergio

    What if you did exp(log(X)*Y) where X and Y are your base and exponent? From my Matlab days, that was the faster answer.

    • I tried B[i,j] = exp(log(A[i,j])*x) and it gives more or less the same times as the original .^.
      I also tried directly doing B[i,j] = A[i,j]^x and got the same times.

      I think the point here is that we are taking advantage of knowing the exponent is an integer, for which exp(log(X)*Y) is slower for small exponents. As pointed out on this StackExchange page, there are even faster ways to do integer exponentiation, such as exponentiation by squaring (sketched below), but numerical precision issues can arise.
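
      A minimal scalar sketch of exponentiation by squaring in Julia (the name powsq and the example values are just for illustration), which needs only O(log n) multiplications instead of the n-1 of chained multiplication:

      function powsq(x, n::Integer)
          # exponentiation by squaring for non-negative integer exponents
          n >= 0 || error("this sketch only handles non-negative exponents")
          result = one(x)
          base = x
          while n > 0
              if isodd(n)
                  result *= base   # fold in the current binary digit of n
              end
              base *= base         # square the base for the next digit
              n >>= 1
          end
          result
      end

      powsq(2.0, 10)   # 1024.0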

