Use xexpression as input for fft functions #37
I think I can implement that. But xtensor-fftw calls … @martinRenou Is there any reason to accept an xexpression as input?
Ok, I see your point. There would be some benefit to using an xexpression as input if there are other operations on the input before calling …

It should be possible to postpone evaluation by using something derived from …
@egpbos I'll take this on. Could you shed some light on why accepting xexpressions is beneficial? It would help me a lot in getting it working. Evaluating …
@egpbos an xfunction is an xexpression.
I don't know the code base of xtensor-fftw. If you have functions like:

```cpp
template <class T>
xarray<T> xtensor_fftw_function(const xarray<T>& arr)
{
    return fftw_function(arr.cbegin(), arr.cend());
}
```

then you won't really benefit from taking an xexpression. In the case of functions like this one (which perform some operations using xtensor before calling the fftw function):

```cpp
template <class T>
xarray<T> xtensor_fftw_function(const xarray<T>& arr1, const xarray<T>& arr2)
{
    xarray<T> sum = arr1 + arr2;
    return fftw_function(sum.cbegin(), sum.cend());
}
```

then you would benefit from using xexpression:

```cpp
template <class T1, class T2>
auto xtensor_fftw_function(const xexpression<T1>& e1, const xexpression<T2>& e2) // Inputs won't be evaluated
{
    auto sum = e1.derived_cast() + e2.derived_cast(); // This won't be evaluated (here it is an xfunction)
    return fftw_function(sum.cbegin(), sum.cend());   // It will be evaluated just once here
}
```

Maybe it is not relevant in the case of xtensor-fftw.
@martinRenou Could replacing xarray with xexpression be an anti-optimization? I haven't measured the compile time and it's purely my guess. What do you think?
Would be good to figure out... But there's also the third time dimension of programmer/maintenance time :) I had in my mind that switching to xexpressions would make things easier to maintain, since we won't have to duplicate code to also support xtensors (instead of just xarrays). Maybe I'm wrong though.
I think you are right.
You can assign an xtensor to an xarray (if the dimension matches). So, I guess our only way to optimize the code would be to use …
I don't see problems with that. They all have a …
Well, having a … So, the thing is that: …
Right. What I was referring to is that the current implementation uses …
Oh ok, I see. Yeah, that would work I guess!
My bad, after discussing with Wolf, this would not work:

```cpp
template <class S>
void foo(const xtensor_fixed<double, S>& a)
{...}
```

It would only work with … Wolf's idea is that we should take an …
Well, …
By the way, how effective is xtensor_fixed?

```cpp
#include <xtensor/xfixed.hpp>

int main() {
    xt::xtensor_fixed<float, xt::xshape<5, 5>> v = xt::ones<int>({5,5});
    return xt::sum(v)[0];
}
```

Edit: …
I had to ask Wolf for that question :P

Concerning your code, maybe try not to mix …:

```cpp
#include <xtensor/xfixed.hpp>
#include <xtensor/xtensor.hpp>

int main() {
    xt::xtensor_fixed<int, xt::xshape<5, 5>> v(xt::xshape<5, 5>{}, 1);
    return xt::sum<int>(v, xt::xshape<0, 1>{}, xt::evaluation_strategy::immediate{})(0);
}
```
I don't know if it makes sense or if it's easy to achieve, but it would be nice not to have to evaluate the xarrays for a call to an fft function, meaning that the implementation of the functions would be:

…

instead of:

…

And maybe the return type could be an xexpression too.