Article

Average Widths and Optimal Recovery of Multivariate Besov Classes in Orlicz Spaces

1 College of Mathematics Science, Inner Mongolia Normal University, Hohhot 010022, China
2 Laboratory of Infinite-Dimensional Hamiltonian System and Its Algorithm Application, Hohhot 010022, China
3 Center for Applied Mathematical Science, Hohhot 010022, China
* Author to whom correspondence should be addressed.
Submission received: 27 March 2024 / Revised: 27 April 2024 / Accepted: 28 April 2024 / Published: 3 May 2024

Abstract

In this paper, we study the average Kolmogorov $\sigma$-widths and the average linear $\sigma$-widths of multivariate isotropic and anisotropic Besov classes in Orlicz spaces and give weak asymptotic estimates of these two widths. At the same time, we also give the asymptotic property of the optimal recovery of isotropic Besov classes in Orlicz spaces.

1. Introduction

Ref. [1] studied the average Kolmogorov $\sigma$-widths and the average linear $\sigma$-widths of multivariate isotropic and anisotropic Besov classes in $L_p$ spaces. Research on the widths and optimal recovery of multivariate Besov classes in Orlicz spaces has not been conducted so far, and there are few related articles; this paper carries out part of that work. Orlicz spaces were introduced by the Polish mathematician W. Orlicz. For more than half a century, the theory of Orlicz spaces has been widely used: it not only provides intuitive background material for functional analysis, but also has many applications in differential equations, integral equations, probability theory, approximation theory of functions, harmonic analysis, and other disciplines. As is well known, the setting and metrics provided by $L_p$ spaces are very effective for problems such as the solvability of equations and the approximation theory of functions. However, $L_p$ spaces are only suitable for dealing with linear and, at best, polynomial-type nonlinear problems; whenever stronger nonlinearities appear, $L_p$ spaces show their limitations. In this case, one naturally turns to an extension of the $L_p$ spaces, namely the Orlicz spaces, as an alternative tool. With the emergence of more complex and nonlinear problems, studying approximation problems in Orlicz spaces has become a natural choice, and this is the practical significance of this paper. Orlicz spaces are larger than the spaces of continuous functions and the $L_p$ spaces; they are an extension of the latter. In particular, the Orlicz spaces generated by N-functions that do not satisfy the $\Delta_2$-condition are a substantial generalization of $L_p$ spaces. Since the norm structure of Orlicz spaces is more complex than that of continuous function spaces and $L_p$ spaces, the study of width and optimal recovery problems in Orlicz spaces is both difficult and of theoretical significance, and it also reflects the passage of the approximation problem from 'small' to 'large' function spaces.
In this paper, let $M(u)$ and $N(v)$ be complementary N-functions; the definition of an N-function is as follows.
Definition 1.
A real-valued function $M(u)$ defined on $\mathbb{R}$ is called an N-function if it has the following properties:
(1) $M(u)$ is an even, continuous, convex function, and $M(0)=0$;
(2) $M(u)>0$ for $u>0$;
(3) $\lim\limits_{u\to 0}\dfrac{M(u)}{u}=0$ and $\lim\limits_{u\to\infty}\dfrac{M(u)}{u}=\infty$.
The complementary N-function is given by $N(v)=\int_0^{|v|}(M')^{-1}(u)\,du$, where $(M')^{-1}$ is the inverse function of $M'$. Properties of N-functions are discussed in ref. [2]. The norm in Orlicz spaces is
$$\|u\|_{M(\mathbb{R}^d)} = \sup_{\rho(v;N)\le 1}\int_{\mathbb{R}^d}u(x)v(x)\,dx.$$
All measurable functions $\{u(x)\}$ with a finite Orlicz norm constitute the Orlicz space $L_M^*(\mathbb{R}^d)$ associated with the N-function $M(u)$, where $\rho(v;N)=\int_{\mathbb{R}^d}N\big(v(x)\big)\,dx$ denotes the modular of $v(x)$ with respect to $N(v)$. Here, $u(x)=u(x_1,\ldots,x_d)$, $v(x)=v(x_1,\ldots,x_d)$, etc., are functions of $d$ variables. For convenience, we write $\|\cdot\|_M=\|\cdot\|_{M(\mathbb{R}^d)}$. According to ref. [2], the Orlicz norm can also be computed by
$$\|u\|_M = \inf_{\beta>0}\frac{1}{\beta}\Big(1+\int_{\mathbb{R}^d}M\big(\beta u(x)\big)\,dx\Big).$$
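The last formula lends itself to a direct numerical illustration. The following sketch is added here and is not from the original paper; the N-function $M(u)=u^2$, the hat-shaped test function, and the grid are illustrative assumptions only. It approximates the Orlicz norm of a compactly supported univariate function via the infimum-over-$\beta$ formula.

```python
# Illustrative numerical sketch (not part of the original paper): approximating the
# Orlicz norm ||u||_M on a grid via
#     ||u||_M = inf_{beta > 0} (1/beta) * (1 + integral of M(beta * u(x)) dx),
# using the example N-function M(u) = u**2 and a hat function supported on [-1, 1].
import numpy as np
from scipy.optimize import minimize_scalar

def M(u):
    # Example N-function; any even convex function with the N-function properties would do.
    return u ** 2

def orlicz_norm(u_vals, dx):
    """Approximate ||u||_M for samples u_vals on a uniform grid with spacing dx."""
    def objective(log_beta):
        beta = np.exp(log_beta)          # optimize over log(beta) so that beta stays positive
        return (1.0 + np.sum(M(beta * u_vals)) * dx) / beta
    res = minimize_scalar(objective)      # one-dimensional minimization over log(beta)
    return res.fun

if __name__ == "__main__":
    x = np.linspace(-1.0, 1.0, 2001)
    dx = x[1] - x[0]
    u = np.maximum(0.0, 1.0 - np.abs(x))  # hat function supported on [-1, 1]
    print("approximate Orlicz norm:", orlicz_norm(u, dx))
```

For $M(u)=u^2$ the infimum can also be evaluated in closed form, $\inf_{\beta>0}\frac{1+\beta^2\int u^2\,dx}{\beta}=2\big(\int u^2\,dx\big)^{1/2}\approx 1.63$ for the hat function above, which gives a quick check of the printed value.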
In this paper, $C$ denotes a positive constant whose value may differ from one occurrence to another. For positive quantities $X$ and $Y$, we write $X\lesssim Y$ if $X\le CY$ for some constant $C$ independent of $\sigma$, $X\gtrsim Y$ if $Y\lesssim X$, and $X\asymp Y$ if both hold; the weak asymptotic estimates below are stated in this notation.
Let $\alpha>0$ and let $P_\alpha:=\chi_\alpha(\cdot)\,x(\cdot)$ be the continuous linear operator on $L_M^*(\mathbb{R}^d)$, where $\chi_\alpha(\cdot)$ is the characteristic function of $[-\alpha,\alpha]^d$. Let $\varepsilon>0$, let $L$ be a subspace of $L_M^*(\mathbb{R}^d)$, and define
$$K_\varepsilon\big(\alpha,L,L_M^*(\mathbb{R}^d)\big) := \min\big\{n\in\mathbb{Z}_+:\ d_n\big(P_\alpha(L\cap BL_M^*(\mathbb{R}^d)),\,L_M^*(\mathbb{R}^d)\big)<\varepsilon\big\},$$
where $d_n(A,X)$ denotes the Kolmogorov $n$-width of $A$ in $X$; see refs. [3,4]. The average dimension of $L$ in $L_M^*(\mathbb{R}^d)$ is defined as
$$\overline{\dim}\big(L,L_M^*(\mathbb{R}^d)\big) := \lim_{\varepsilon\to 0}\liminf_{\alpha\to\infty}\frac{K_\varepsilon\big(\alpha,L,L_M^*(\mathbb{R}^d)\big)}{(2\alpha)^d}.$$
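For orientation, here is an added elementary example (not from the original text): if $L$ is any finite-dimensional subspace of $L_M^*(\mathbb{R}^d)$, then $P_\alpha(L\cap BL_M^*(\mathbb{R}^d))$ lies in a subspace of dimension at most $\dim L$, so $d_n\big(P_\alpha(L\cap BL_M^*(\mathbb{R}^d)),L_M^*(\mathbb{R}^d)\big)=0$ for every $n\ge\dim L$ and hence $K_\varepsilon\big(\alpha,L,L_M^*(\mathbb{R}^d)\big)\le\dim L$ for all $\alpha$ and $\varepsilon$. Therefore,
$$\overline{\dim}\big(L,L_M^*(\mathbb{R}^d)\big)=\lim_{\varepsilon\to0}\liminf_{\alpha\to\infty}\frac{K_\varepsilon\big(\alpha,L,L_M^*(\mathbb{R}^d)\big)}{(2\alpha)^d}=0,$$
so only infinite-dimensional subspaces can have positive average dimension.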
Let $\sigma>0$ and let $S$ be a centrally symmetric subset of $L_M^*(\mathbb{R}^d)$. The average Kolmogorov $\sigma$-width (average $\sigma$-K width) of $S$ in $L_M^*(\mathbb{R}^d)$ is defined by
$$\bar d_\sigma\big(S,L_M^*(\mathbb{R}^d)\big) := \inf_{L}\sup_{x(\cdot)\in S}\inf_{y(\cdot)\in L}\|x(\cdot)-y(\cdot)\|_M,$$
where the first infimum is taken over all subspaces $L\subset L_M^*(\mathbb{R}^d)$ satisfying $\overline{\dim}\big(L,L_M^*(\mathbb{R}^d)\big)\le\sigma$. The average linear $\sigma$-width (average $\sigma$-L width) of $S$ in $L_M^*(\mathbb{R}^d)$ is defined by
$$\bar d'_\sigma\big(S,L_M^*(\mathbb{R}^d)\big) := \inf_{(Y,\Lambda)}\sup_{x(\cdot)\in S}\|x(\cdot)-\Lambda x(\cdot)\|_M,$$
where the infimum is taken over all pairs $(Y,\Lambda)$ such that $Y$ is a normed space continuously embedded in $L_M^*(\mathbb{R}^d)$ with $S\subset Y$, $\Lambda$ is a continuous linear operator from $Y$ into $L_M^*(\mathbb{R}^d)$, and $\overline{\dim}\big(\mathrm{Im}\,\Lambda,L_M^*(\mathbb{R}^d)\big)\le\sigma$, where $\mathrm{Im}\,\Lambda$ denotes the range of the operator $\Lambda$.
By definition, we have
$$\bar d_\sigma\big(S,L_M^*(\mathbb{R}^d)\big) \le \bar d'_\sigma\big(S,L_M^*(\mathbb{R}^d)\big).$$
Suppose that $k\in\mathbb{N}$; for every $f\in L_M^*(\mathbb{R}^d)$,
$$\Delta_t^{k}f(x) = \sum_{l=0}^{k}(-1)^{l+k}\binom{k}{l}f(x+lt)$$
is the $k$-th difference of $f$ at the point $x$ with step $t$, where $\binom{k}{l}=\frac{k!}{l!(k-l)!}$. We use $\Delta_{t_j}^{k}f(x)$ to denote $\Delta_t^{k}f(x)$ when $t=(0,\ldots,0,t_j,0,\ldots,0)$.
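As a small added illustration (not from the paper), the following sketch evaluates the $k$-th difference from the binomial-sum definition above for a univariate function; the test function $f(x)=x^3$ and the step are arbitrary choices.

```python
# Illustrative sketch: Delta_t^k f(x) = sum_{l=0}^{k} (-1)**(l+k) * C(k, l) * f(x + l*t).
from math import comb

def kth_difference(f, x, t, k):
    """Compute the k-th difference of f at x with step t (one variable)."""
    return sum((-1) ** (l + k) * comb(k, l) * f(x + l * t) for l in range(k + 1))

if __name__ == "__main__":
    f = lambda x: x ** 3
    # For a cubic polynomial the third difference is the constant 6 * t**3 and the
    # fourth difference vanishes, which is a quick sanity check of the formula.
    print(kth_difference(f, 0.0, 0.1, 3))  # ~ 0.006
    print(kth_difference(f, 0.0, 0.1, 4))  # ~ 0.0
```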
Definition 2.
Let $k\in\mathbb{N}$, $r>0$, $k-r>0$, $1\le\theta\le\infty$. We say $f\in B^{r}_{M\theta}(\mathbb{R}^d)$ if $f$ satisfies the following conditions:
(1) $f\in L_M^*(\mathbb{R}^d)$;
(2)
$$\|f\|_{b^{r}_{M\theta}(\mathbb{R}^d)} := \begin{cases}\displaystyle\bigg(\int_{\mathbb{R}^d}\Big(\frac{\|\Delta_t^{k}f(\cdot)\|_M}{|t|^{r}}\Big)^{\theta}\frac{dt}{|t|^{d}}\bigg)^{1/\theta}<\infty, & 1\le\theta<\infty,\\[2ex] \displaystyle\sup_{|t|\ne 0}\frac{\|\Delta_t^{k}f(\cdot)\|_M}{|t|^{r}}<\infty, & \theta=\infty,\end{cases}$$
where $|\cdot|$ is the Euclidean norm.
By ref. [5], the linear space $B^{r}_{M\theta}(\mathbb{R}^d)$ is a Banach space with the norm
$$\|f\|_{B^{r}_{M\theta}(\mathbb{R}^d)} := \|f\|_M + \|f\|_{b^{r}_{M\theta}(\mathbb{R}^d)}$$
and is an isotropic Besov space.
Definition 3.
Let $\mathbf{k}=(k_1,\ldots,k_d)\in\mathbb{Z}_+^d$, $\mathbf{r}=(r_1,\ldots,r_d)$, $r_j>0$, $k_j>r_j$, $j=1,\ldots,d$, $1\le\theta\le\infty$. We say $f\in B^{\mathbf r}_{M\theta}(\mathbb{R}^d)$ if $f$ satisfies the following conditions:
(1) $f\in L_M^*(\mathbb{R}^d)$;
(2) For $j=1,\ldots,d$, we have
$$\|f\|_{b^{r_j}_{x_j M\theta}(\mathbb{R}^d)} := \begin{cases}\displaystyle\bigg(\int_{\mathbb{R}}\Big(\frac{\|\Delta_{t_j}^{k_j}f(\cdot)\|_M}{|t_j|^{r_j}}\Big)^{\theta}\frac{dt_j}{|t_j|}\bigg)^{1/\theta}<\infty, & 1\le\theta<\infty,\\[2ex] \displaystyle\sup_{t_j\ne 0}\frac{\|\Delta_{t_j}^{k_j}f(\cdot)\|_M}{|t_j|^{r_j}}<\infty, & \theta=\infty.\end{cases}$$
By ref. [5], the linear space $B^{\mathbf r}_{M\theta}(\mathbb{R}^d)$ is a Banach space with the norm
$$\|f\|_{B^{\mathbf r}_{M\theta}(\mathbb{R}^d)} := \|f\|_M + \sum_{j=1}^{d}\|f\|_{b^{r_j}_{x_j M\theta}(\mathbb{R}^d)}$$
and is an anisotropic Besov space. By ref. [5], $B^{r}_{M\theta}(\mathbb{R}^d)=B^{(r,\ldots,r)}_{M\theta}(\mathbb{R}^d)$ when $r_1=\cdots=r_d=r$.
For a real vector $\mathbf{M}=(M_1,\ldots,M_d)$ with $M_j>0$, $j=1,\ldots,d$, we define
$$Sb^{r}_{M\theta}(\mathbb{R}^d) := \big\{f\in L_M^*(\mathbb{R}^d):\ \|f\|_{b^{r}_{M\theta}(\mathbb{R}^d)}\le 1\big\},$$
$$SB^{r}_{M\theta}(\mathbb{R}^d) := \big\{f\in L_M^*(\mathbb{R}^d):\ \|f\|_{B^{r}_{M\theta}(\mathbb{R}^d)}\le 1\big\},$$
$$Sb^{\mathbf r}_{M\theta}(\mathbb{R}^d) := \big\{f\in L_M^*(\mathbb{R}^d):\ \|f\|_{b^{r_j}_{x_j M\theta}(\mathbb{R}^d)}\le M_j,\ j=1,\ldots,d\big\},$$
$$SB^{\mathbf r}_{M\theta}(\mathbb{R}^d) := \big\{f\in L_M^*(\mathbb{R}^d):\ \|f\|_{B^{\mathbf r}_{M\theta}(\mathbb{R}^d)}\le 1\big\}.$$
Let $\rho>0$, $\nu=(\nu_1,\ldots,\nu_d)$, $\nu_i>0$, $i=1,\ldots,d$. Define $\mathcal{B}_{\nu M}(\mathbb{R}^d)$ as the set of all functions $f\in L_M^*(\mathbb{R}^d)$ whose Fourier transform $\hat f$, taken in the distributional sense, is supported in $[-\nu_1,\nu_1]\times\cdots\times[-\nu_d,\nu_d]$. The Schwartz theorem states that $\mathcal{B}_{\nu M}(\mathbb{R}^d)$ coincides with the set of all functions in $L_M^*(\mathbb{R}^d)$ that can be continued analytically to entire functions of exponential type $\omega\le\nu$. Here, $\omega\le\nu$ means that $\omega_j\le\nu_j$, $j=1,\ldots,d$, for every $\omega\in\mathbb{R}_+^d=\{x\in\mathbb{R}^d:\ x_j>0,\ j=1,\ldots,d\}$.
In this paper, we study the average Kolmogorov widths, the average linear widths, and the optimal recovery problem for the Besov classes $Sb^{r}_{M\theta}(\mathbb{R}^d)$, $SB^{r}_{M\theta}(\mathbb{R}^d)$, $Sb^{\mathbf r}_{M\theta}(\mathbb{R}^d)$, and $SB^{\mathbf r}_{M\theta}(\mathbb{R}^d)$.

2. Average Widths Problem

Lemma 1
([6,7]). Let $\rho>0$, $\nu=(\nu_1,\ldots,\nu_d)$, $\nu_i>0$, $i=1,\ldots,d$. Then,
$$\overline{\dim}\big(\mathcal{B}_{\nu M}(\mathbb{R}^d),L_M^*(\mathbb{R}^d)\big)=\frac{\nu_1\cdots\nu_d}{\pi^{d}}.$$
Let $BX$ denote the unit ball of $X$.
Lemma 2
([3]). If $1\le n<\dim(X)$, then
$$d_n(BX,X)=1,$$
where $d_n(A,X)$ denotes the usual Kolmogorov $n$-width of $A$ in $X$, $X$ is a normed linear space, and $A$ is a subset of $X$.
Theorem 1.
Let $\mathbf{k}=(k_1,\ldots,k_d)\in\mathbb{Z}_+^d$, $\mathbf{r}=(r_1,\ldots,r_d)$, $k_j>r_j>0$, $j=1,\ldots,d$, $1\le\theta\le\infty$, $\sigma\ge1$. Then,
(1)
$$\mu\,\sigma^{-a} \lesssim \bar d_\sigma\big(A,L_M^*(\mathbb{R}^d)\big) \le \bar d'_\sigma\big(A,L_M^*(\mathbb{R}^d)\big) \le \sup_{f\in A}\|f-T_{\rho_1,\ldots,\rho_d}f\|_M \lesssim \mu\,\sigma^{-a},$$
where
$$A=Sb^{\mathbf r}_{M\theta}(\mathbb{R}^d)\ \text{or}\ SB^{\mathbf r}_{M\theta}(\mathbb{R}^d),\qquad a=\Big(\sum_{j=1}^{d}1/r_j\Big)^{-1},$$
$$\mu=\prod_{j=1}^{d}M_j^{a/r_j}\qquad(\mu=1\ \text{when}\ A=SB^{\mathbf r}_{M\theta}(\mathbb{R}^d)),$$
and the definition of $T_{\rho_1,\ldots,\rho_d}f$ is given in the proof below.
(2) $\mathcal{B}_{\rho(\sigma)M}(\mathbb{R}^d)$ is a weakly asymptotically optimal subspace of average dimension $\sigma$ for $\bar d_\sigma\big(A,L_M^*(\mathbb{R}^d)\big)$, where $\rho(\sigma)=(\rho_1(\sigma),\ldots,\rho_d(\sigma))$, $\rho_j(\sigma)>0$, is defined by $\rho_j(\sigma)=\big(\mu^{-1}M_j\sigma^{a}\big)^{1/r_j}$ ($\rho_j(\sigma)=\sigma^{a/r_j}$ when $A=SB^{\mathbf r}_{M\theta}(\mathbb{R}^d)$), $j=1,\ldots,d$.
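To make the exponents concrete, here is an added illustrative instance (it is not part of the theorem's statement): for $d=2$, $\mathbf{r}=(1,2)$, and $M_1=M_2=1$, we get
$$a=\Big(\frac{1}{1}+\frac{1}{2}\Big)^{-1}=\frac{2}{3},\qquad \mu=1,\qquad \bar d_\sigma\big(A,L_M^*(\mathbb{R}^2)\big)\asymp\bar d'_\sigma\big(A,L_M^*(\mathbb{R}^2)\big)\asymp\sigma^{-2/3},$$
with the weakly asymptotically optimal bandwidths $\rho_1(\sigma)=\sigma^{2/3}$ and $\rho_2(\sigma)=\sigma^{1/3}$; the smoother direction requires the smaller bandwidth.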
Proof. 
To find the upper bound, first of all, we construct the following continuous linear operators from $B^{\mathbf r}_{M\theta}(\mathbb{R}^d)$ to $L_M^*(\mathbb{R}^d)$. For every $f\in L_M^*(\mathbb{R}^d)$, $t\in\mathbb{R}^d$, and natural number $l$, we have
$$(-1)^{l+1}\Delta_t^{l}f(x) = (-1)^{l+1}\sum_{j=0}^{l}(-1)^{l+j}\binom{l}{j}f(x+jt) = \sum_{j=1}^{l}d_j\,f(x+jt)-f(x),$$
where $\sum_{j=1}^{l}d_j=1$. For any real number $\nu>0$, let
$$g_\nu(t) = \lambda_{\nu,s}^{-1}\Big(\frac{\sin\nu t}{t}\Big)^{2s},\qquad t\in\mathbb{R},\ 2s>1,$$
be an even entire function of one variable of exponential type $2s\nu$, where $\lambda_{\nu,s}=\int_{\mathbb{R}}(\sin\nu t/t)^{2s}\,dt\asymp\nu^{2s-1}$ as $\nu\to\infty$ (indeed, the substitution $u=\nu t$ gives $\lambda_{\nu,s}=\nu^{2s-1}\int_{\mathbb{R}}(\sin u/u)^{2s}\,du$). Let $\rho=(\rho_1,\ldots,\rho_d)$, $\rho_i>0$, $i=1,\ldots,d$. For every $f\in B^{\mathbf r}_{M\theta}(\mathbb{R}^d)$, let
$$T_{\rho_i}(f,x) := \int_{\mathbb{R}}g_{\rho_i}(t_i)\big[(-1)^{k_i+1}\Delta_{t_i}^{k_i}f(x)+f(x)\big]\,dt_i = \int_{\mathbb{R}}g_{\rho_i}(t_i)\sum_{j=1}^{k_i}d_j\,f(x_1,\ldots,x_{i-1},x_i+jt_i,x_{i+1},\ldots,x_d)\,dt_i = \int_{\mathbb{R}}G_{\rho_i}(t_i-x_i)\,f(x_1,\ldots,x_{i-1},t_i,x_{i+1},\ldots,x_d)\,dt_i,$$
where $G_{\rho_i}(t)=\sum_{j=1}^{k_i}(d_j/j)\,g_{\rho_i}(t/j)$. By ref. [5], $G_{\rho_i}(t)$ is an entire function of one variable of exponential type $2s\rho_i$. Let
$$T_{\rho_1,\ldots,\rho_n}(f,x) := \int_{\mathbb{R}^n}G_{\rho_1}(u_1)\cdots G_{\rho_n}(u_n)\,f(x_1+u_1,\ldots,x_n+u_n,x_{n+1},\ldots,x_d)\,du,$$
$1\le n\le d$. Then, $T_{\rho_1,\ldots,\rho_d}(f,\cdot)$ is a $d$-variable entire function of exponential type $2s\rho=(2s\rho_1,\ldots,2s\rho_d)$. Let $2s>d+\max\{r_i:\ i=1,\ldots,d\}$. Using the Minkowski inequality and the Hölder inequality, we have
$$\begin{aligned}
\|f(\cdot)-T_{\rho_1}(f,\cdot)\|_M &= \sup_{\rho(v;N)\le1}\int_{\mathbb{R}^d}\big(f(x)-T_{\rho_1}(f,x)\big)v(x)\,dx \le \sup_{\rho(v;N)\le1}\int_{\mathbb{R}^d}\big|f(x)-T_{\rho_1}(f,x)\big|\,|v(x)|\,dx\\
&= \sup_{\rho(v;N)\le1}\int_{\mathbb{R}^d}\Big|f(x)-\int_{\mathbb{R}}g_{\rho_1}(t_1)\big[(-1)^{k_1+1}\Delta_{t_1}^{k_1}f(x)+f(x)\big]\,dt_1\Big|\,|v(x)|\,dx\\
&= \sup_{\rho(v;N)\le1}\int_{\mathbb{R}^d}\Big|\int_{\mathbb{R}}g_{\rho_1}(t_1)\,\Delta_{t_1}^{k_1}f(x)\,dt_1\Big|\,|v(x)|\,dx \le \sup_{\rho(v;N)\le1}\int_{\mathbb{R}}\Big(\int_{\mathbb{R}^d}\big|\Delta_{t_1}^{k_1}f(x)\big|\,|v(x)|\,dx\Big)g_{\rho_1}(t_1)\,dt_1\\
&\le \int_{\mathbb{R}}\big\|\Delta_{t_1}^{k_1}f(\cdot)\big\|_M\,g_{\rho_1}(t_1)\,dt_1 = \int_{\mathbb{R}}\frac{\big\|\Delta_{t_1}^{k_1}f(\cdot)\big\|_M}{|t_1|^{\,r_1+(1/\theta)}}\,|t_1|^{\,r_1+(1/\theta)}\,g_{\rho_1}(t_1)\,dt_1\\
&\le \bigg(\int_{\mathbb{R}}\Big(\frac{\big\|\Delta_{t_1}^{k_1}f(\cdot)\big\|_M}{|t_1|^{\,r_1+(1/\theta)}}\Big)^{\theta}dt_1\bigg)^{1/\theta}\bigg(\int_{\mathbb{R}}|t_1|^{\,(r_1+(1/\theta))\theta'}\,|g_{\rho_1}(t_1)|^{\theta'}\,dt_1\bigg)^{1/\theta'} \le C\rho_1^{-r_1}\,\|f\|_{b^{r_1}_{x_1 M\theta}(\mathbb{R}^d)},
\end{aligned}$$
where $1/\theta+1/\theta'=1$. In addition, we have
$$\begin{aligned}
\|T_{\rho_1}(f,\cdot)-T_{\rho_1,\rho_2}(f,\cdot)\|_M &= \Big\|\int_{\mathbb{R}}G_{\rho_1}(t_1)\,f(x_1+t_1,x_2,\ldots,x_d)\,dt_1 - \int_{\mathbb{R}^2}G_{\rho_1}(t_1)G_{\rho_2}(t_2)\,f(x_1+t_1,x_2+t_2,x_3,\ldots,x_d)\,dt_1dt_2\Big\|_M\\
&= \Big\|\int_{\mathbb{R}}G_{\rho_1}(t_1)\,h(x_1+t_1,x_2,\ldots,x_d)\,dt_1\Big\|_M \le \int_{\mathbb{R}}g_{\rho_1}(t_1)\,\|h(\cdot)\|_M\,dt_1 = \|h(\cdot)\|_M,
\end{aligned}$$
where
$$h(x_1,x_2,\ldots,x_d) = f(x_1,x_2,\ldots,x_d) - \int_{\mathbb{R}}G_{\rho_2}(t_2)\,f(x_1,x_2+t_2,x_3,\ldots,x_d)\,dt_2.$$
Similar to (2), we have
$$\|h(\cdot)\|_M \le C\rho_2^{-r_2}\,\|f\|_{b^{r_2}_{x_2 M\theta}(\mathbb{R}^d)}.$$
Inductively, for $2\le j\le d$, we have
$$\|T_{\rho_1,\ldots,\rho_{j-1}}(f,\cdot)-T_{\rho_1,\ldots,\rho_j}(f,\cdot)\|_M \le C\rho_j^{-r_j}\,\|f\|_{b^{r_j}_{x_j M\theta}(\mathbb{R}^d)}.$$
Therefore, from (3), we have
$$\|f(\cdot)-T_{\rho_1,\ldots,\rho_d}(f,\cdot)\|_M = \big\|f(\cdot)-T_{\rho_1}(f,\cdot)+T_{\rho_1}(f,\cdot)-T_{\rho_1,\rho_2}(f,\cdot)+\cdots-T_{\rho_1,\ldots,\rho_d}(f,\cdot)\big\|_M \le C\sum_{j=1}^{d}\rho_j^{-r_j}\,\|f\|_{b^{r_j}_{x_j M\theta}(\mathbb{R}^d)}.$$
By (4), we have
$$\|T_{\rho_1,\ldots,\rho_d}(f,\cdot)\|_M \le \Big(\|f\|_M+\sum_{j=1}^{d}\|f\|_{b^{r_j}_{x_j M\theta}(\mathbb{R}^d)}\Big)\max\{1,\bar C\},$$
where $\bar C=C\max\{\rho_j^{-r_j}:\ 1\le j\le d\}$. Therefore, the operator $\Lambda_2: B^{\mathbf r}_{M\theta}(\mathbb{R}^d)\to L_M^*(\mathbb{R}^d)$, $\Lambda_2 f(\cdot)=T_{\rho_1,\ldots,\rho_d}(f,\cdot)$, is continuous and linear. Let $2s\rho_j=\rho_j(\sigma)=\big(\mu^{-1}M_j\sigma^{a}\big)^{1/r_j}$ (let $\rho_j(\sigma)=\sigma^{a/r_j}$ when $A=SB^{\mathbf r}_{M\theta}(\mathbb{R}^d)$). Hence, by (4) and Lemma 1, we have
$$\bar d'_\sigma\big(A,L_M^*(\mathbb{R}^d)\big) \le \sup_{f\in A}\|f(\cdot)-T_{\rho_1,\ldots,\rho_d}(f,\cdot)\|_M \le C\sup_{f\in A}\sum_{j=1}^{d}\rho_j^{-r_j}\,\|f\|_{b^{r_j}_{x_j M\theta}(\mathbb{R}^d)} \lesssim \mu\,\sigma^{-a}.$$
To estimate the lower bound, let $\lambda=(\lambda_1,\ldots,\lambda_d)$, $\lambda_i=\big(M_i\mu^{-1}(2\sigma)^{a}\big)^{-1/r_i}$ ($\lambda_i=(2\sigma)^{-a/r_i}$ when $A=SB^{\mathbf r}_{M\theta}(\mathbb{R}^d)$), $i=1,\ldots,d$, and let $\phi(x)\in C^{\infty}(\mathbb{R})$ be a non-zero function with $\mathrm{supp}(\phi)\subset[0,1]$. For every $j=(j_1,\ldots,j_d)\in\mathbb{Z}^d$ and every $t=(t_1,\ldots,t_d)\in\mathbb{R}^d$, let
$$\Phi_{j,\lambda}(t) := \prod_{k=1}^{d}\phi\big(\lambda_k^{-1}t_k-j_k\big);$$
then, $\Phi_{j,\lambda}(t)\in C^{\infty}(\mathbb{R}^d)$ and $\mathrm{supp}\,\Phi_{j,\lambda}\subset\Delta_{j,\lambda}:=[j_1\lambda_1,(j_1+1)\lambda_1]\times\cdots\times[j_d\lambda_d,(j_d+1)\lambda_d]$.
For any $N>0$, let $m_i(N):=[N\lambda_i^{-1}]$. Define the following set of functions:
$$L_{m,\lambda} = \mathrm{span}\big\{\Phi_{j,\lambda}(t):\ -m_k\le j_k\le m_k-1,\ k=1,\ldots,d\big\};$$
then the dimension of the space $L_{m,\lambda}$ is $\bar m=\prod_{i=1}^{d}(2m_i)$. For any $f\in L_{m,\lambda}$, it is easy to see that
$$\mathrm{supp}\,f\subset[-m_1\lambda_1,m_1\lambda_1]\times\cdots\times[-m_d\lambda_d,m_d\lambda_d]\subset[-N,N]^{d}.$$
If
$$f(t)=\sum_{j_1=-m_1}^{m_1-1}\cdots\sum_{j_d=-m_d}^{m_d-1}a_{j_1,\ldots,j_d}\,\Phi_{j,\lambda}(t),$$
then
$$\|f\|_M = \sup_{\rho(v;N)\le1}\int_{\mathbb{R}^d}\sum_{j_1=-m_1}^{m_1-1}\cdots\sum_{j_d=-m_d}^{m_d-1}a_{j_1,\ldots,j_d}\,\Phi_{j,\lambda}(t)\,v(t)\,dt = \sup_{\rho(v;N)\le1}\int_{\mathbb{R}^d}\sum_{j_1=-m_1}^{m_1-1}\cdots\sum_{j_d=-m_d}^{m_d-1}a_{j_1,\ldots,j_d}\prod_{k=1}^{d}\phi\big(\lambda_k^{-1}t_k-j_k\big)\,v(t)\,dt = \prod_{j=1}^{d}\lambda_j\,\|\phi\|_{M[0,1]}^{d}\,\|a\|_{l^{\bar m}},$$
where $\|a\|_{l^{\bar m}}=\sum_{j_1=-m_1}^{m_1-1}\cdots\sum_{j_d=-m_d}^{m_d-1}|a_{j_1,\ldots,j_d}|$.
By the Minkowski inequality, we have
$$\begin{aligned}
\big\|\Delta_{t_i}^{k_i}f(\cdot)\big\|_M &= \bigg\|\int_0^{t_i}du_1\cdots\int_0^{t_i}\frac{\partial^{k_i}}{\partial x_i^{k_i}} f(x_1,\ldots,x_i+u_1+\cdots+u_{k_i},x_{i+1},\ldots,x_d)\,du_{k_i}\bigg\|_M\\
&= \bigg\|\int_0^{t_i}du_1\cdots\int_0^{t_i}\sum_{j_1=-m_1}^{m_1-1}\cdots\sum_{j_d=-m_d}^{m_d-1}a_{j_1,\ldots,j_d}\,\phi^{(k_i)}\big(\lambda_i^{-1}(x_i+u_1+\cdots+u_{k_i})-j_i\big)\,\lambda_i^{-k_i}\prod_{s\ne i}\phi\big(\lambda_s^{-1}x_s-j_s\big)\,du_{k_i}\bigg\|_M\\
&\le \int_0^{|t_i|}du_1\cdots\int_0^{|t_i|}\bigg\|\sum_{j_1=-m_1}^{m_1-1}\cdots\sum_{j_d=-m_d}^{m_d-1}a_{j_1,\ldots,j_d}\,\phi^{(k_i)}\big(\lambda_i^{-1}(x_i+u_1+\cdots+u_{k_i})-j_i\big)\,\lambda_i^{-k_i}\prod_{s\ne i}\phi\big(\lambda_s^{-1}x_s-j_s\big)\bigg\|_M du_{k_i}\\
&= \prod_{j=1}^{d}\lambda_j\,\lambda_i^{-k_i}\,\|\phi^{(k_i)}\|_{M[0,1]}\,\|\phi\|_{M[0,1]}^{d-1}\,\|a\|_{l^{\bar m}}\,|t_i|^{k_i} = C\prod_{j=1}^{d}\lambda_j\,\lambda_i^{-k_i}\,|t_i|^{k_i}\,\|a\|_{l^{\bar m}}.
\end{aligned}$$
By (6), we have
$$\|\Delta_{t_i}^{k_i}f(\cdot)\|_M \le C\|f\|_M \le C\prod_{j=1}^{d}\lambda_j\,\|a\|_{l^{\bar m}}.$$
Hence, by (6) and (7), we have
$$\|\Delta_{t_i}^{k_i}f(\cdot)\|_M \le C\prod_{j=1}^{d}\lambda_j\,\|a\|_{l^{\bar m}}\,\min\big\{1,(\lambda_i^{-1}|t_i|)^{k_i}\big\}.$$
In addition, for $1\le\theta<\infty$, we have
$$\|f\|_{b^{r_i}_{x_i M\theta}(\mathbb{R}^d)} = \bigg(\int_{\mathbb{R}}\Big(\frac{\|\Delta_{t_i}^{k_i}f(\cdot)\|_M}{|t_i|^{r_i}}\Big)^{\theta}\frac{dt_i}{|t_i|}\bigg)^{1/\theta} \le C\prod_{j=1}^{d}\lambda_j\,\|a\|_{l^{\bar m}}\bigg(\int_0^{\lambda_i}\lambda_i^{-k_i\theta}\,R^{(k_i-r_i)\theta-1}\,dR+\int_{\lambda_i}^{\infty}R^{-r_i\theta-1}\,dR\bigg)^{1/\theta} = C\prod_{j=1}^{d}\lambda_j\,\lambda_i^{-r_i}\,\|a\|_{l^{\bar m}}.$$
For $\theta=\infty$, (8) is also valid. Let
$$\delta_N := \prod_{j=1}^{d}\lambda_j\,\mu^{-1}(2\sigma)^{a}\,C_N\qquad\big(C_N=\|\phi\|_{M[0,1]}^{d}+\max C\big),$$
$$Q_N(\delta_N) := \big\{f\in L_{m,\lambda}:\ \|a\|_{l^{\bar m}}\le\delta_N^{-1}\big\}.$$
Then, $Q_N\subset A$.
Now, we estimate the quantity $\bar d_\sigma\big(A,L_M^*(\mathbb{R}^d)\big)$. Let $\mathcal{A}$ be a subspace of $L_M^*(\mathbb{R}^d)$ whose average dimension satisfies $\overline{\dim}\big(\mathcal{A},L_M^*(\mathbb{R}^d)\big)\le\sigma$. By the definition of the average dimension, for any $N>0$ and $\varepsilon>0$, there exists a subspace $A_1\subset L_M^*(I_N^d)$ (here $I_N^d=[-N,N]^d$) of dimension $K:=\dim A_1=K_\varepsilon\big(N,\mathcal{A},L_M^*(I_N^d)\big)$ such that
$$E\big(B(\mathcal{A})|_{I_N^d},\ A_1,\ L_M^*(I_N^d)\big)\le\varepsilon,$$
where $B(\mathcal{A})$ denotes the unit ball of the space $\mathcal{A}$. In addition, for any $g\in\mathcal{A}$, we have
$$e\big(g|_{I_N^d},\ A_1,\ L_M^*(I_N^d)\big)\le\varepsilon\,\|g\|_M;$$
here, $e(x,B,X):=\inf_{y(\cdot)\in B}\|x(\cdot)-y(\cdot)\|_X$ denotes the distance from the element $x$ to the subset $B$ of the normed linear space $X$. Hence, for any $f\in A$ and any $g\in\mathcal{A}$, we have
$$\begin{aligned}
\|f-g\|_M &\ge \|f-g\|_{M(I_N^d)} \ge e\big(f,A_1,L_M^*(I_N^d)\big)-e\big(g,A_1,L_M^*(I_N^d)\big)\\
&\ge e\big(f,A_1,L_M^*(I_N^d)\big)-\varepsilon\|g\|_M \ge e\big(f,A_1,L_M^*(I_N^d)\big)-\varepsilon\|f-g\|_M-\varepsilon\|f\|_M.
\end{aligned}$$
Hence,
$$(1+\varepsilon)\|f-g\|_M \ge e\big(f,A_1,L_M^*(I_N^d)\big)-\varepsilon\|f\|_M.$$
In addition, we also have
$$(1+\varepsilon)\,E\big(A,\mathcal{A},L_M^*(I_N^d)\big) \ge E\big(Q_N,A_1,L_M^*(I_N^d)\big)-\varepsilon\sup_{f\in Q_N}\|f\|_M.$$
By (5), (9), (10), and Lemma 2, we have
$$E\big(Q_N,A_1,L_M^*(I_N^d)\big) \ge C\prod_{j=1}^{d}\lambda_j\,\delta_N^{-1}\,d_K\big(B(l^{\bar m}),\,l^{\bar m}\big) = C\prod_{j=1}^{d}\lambda_j\,\delta_N^{-1} = C\mu\,\sigma^{-a}.$$
By (11) and (12), letting $N\to\infty$ and $\varepsilon\to0$, we obtain
$$\bar d_\sigma\big(A,L_M^*(\mathbb{R}^d)\big) \gtrsim \mu\,\sigma^{-a}.$$
By (1), we finish the proof of the Theorem.  □
Since $B^{r}_{M\theta}(\mathbb{R}^d)=B^{(r,\ldots,r)}_{M\theta}(\mathbb{R}^d)$ when $r_1=\cdots=r_d=r$, taking $M_j=1$ and $r_j=r$, $j=1,\ldots,d$, in Theorem 1, we obtain the following.
Corollary 1.
Let $k\in\mathbb{N}$, $r>0$, $k-r>0$, $1\le\theta<\infty$, $\sigma\ge1$. Then,
(1)
$$\sigma^{-r/d} \lesssim \bar d_\sigma\big(U,L_M^*(\mathbb{R}^d)\big) \le \bar d'_\sigma\big(U,L_M^*(\mathbb{R}^d)\big) \le \sup_{f\in U}\|f-T_{\rho_1,\ldots,\rho_d}f\|_M \lesssim \sigma^{-r/d},$$
where $U=Sb^{r}_{M\theta}(\mathbb{R}^d)$ or $SB^{r}_{M\theta}(\mathbb{R}^d)$.
(2) $\mathcal{B}_{\rho(\sigma)M}(\mathbb{R}^d)$ is a weakly asymptotically optimal subspace of average dimension $\sigma$ for $\bar d_\sigma\big(U,L_M^*(\mathbb{R}^d)\big)$, where $\rho(\sigma)>0$ is defined by $\rho(\sigma)=\sigma^{1/d}$.

3. Optimal Recovery Problem

Following ref. [8], and similarly to the definitions in [9,10], for $\sigma>0$ let $\Theta_\sigma$ be the set of all sequences $\xi=\{\xi_\nu\}_{\nu\in\mathbb{Z}^d}$ of points $\xi_\nu\in\mathbb{R}^d$ that satisfy the following conditions:
(1) For $\nu,\nu'\in\mathbb{Z}^d$, $|\xi_\nu|\le|\xi_{\nu'}|$ if and only if $|\nu|\le|\nu'|$;
(2) For $\nu,\nu'\in\mathbb{Z}^d$, $\xi_\nu\ne\xi_{\nu'}$ if and only if $\nu\ne\nu'$;
(3)
$$\overline{\mathrm{card}}\,\xi := \liminf_{c\to\infty}\frac{\mathrm{card}\big(\xi\cap[-c,c]^d\big)}{(2c)^d}\le\sigma,$$
where $|\cdot|$ is the usual Euclidean norm and, for any $c>0$, $\mathrm{card}\big(\xi\cap[-c,c]^d\big)$ denotes the number of elements of the set $\xi\cap[-c,c]^d$.
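As a concrete illustration (added here; it is not part of the original definitions), the scaled integer lattice with spacing $\rho^{-1}$, $\rho=\sigma^{1/d}$, has average cardinality exactly $\sigma$:
$$\xi=\Big\{\frac{\nu}{\rho}\Big\}_{\nu\in\mathbb{Z}^d},\qquad \mathrm{card}\big(\xi\cap[-c,c]^d\big)=\big(2\lfloor c\rho\rfloor+1\big)^d,\qquad \overline{\mathrm{card}}\,\xi=\lim_{c\to\infty}\frac{\big(2\lfloor c\rho\rfloor+1\big)^d}{(2c)^d}=\rho^d=\sigma,$$
so $\xi\in\Theta_\sigma$. Sampling on a lattice of exactly this density is what the recovery operator $S_{\beta,\rho}$ in the proof of Theorem 2 uses.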
Let $X(\mathbb{R}^d)$ be a normed space of functions on $\mathbb{R}^d$ with the norm $\|\cdot\|_X$, and for subsets $A,B$ of $X(\mathbb{R}^d)$, let
$$E(A,B,X) := \sup_{x(\cdot)\in A}\inf_{y(\cdot)\in B}\|x(\cdot)-y(\cdot)\|_X.$$
Let $K\subset X(\mathbb{R}^d)$; the quantity
$$d(K) := \sup_{x(\cdot),y(\cdot)\in K}\|x(\cdot)-y(\cdot)\|_X$$
is called the diameter of $K$. For $\xi\in\Theta_\sigma$, the information about $f\in K$ is defined as $I_\xi f=\{f(\xi_\nu)\}_{\nu\in\mathbb{Z}^d}$, and $I_\xi$ is called a standard sampling operator of average cardinality $\sigma$. The quantity
$$\Delta_\sigma(K,X) := \inf_{\xi\in\Theta_\sigma}\sup_{f\in K} d\big(I_\xi^{-1}I_\xi f\cap K\big)$$
is called the minimum information diameter of the set $K$ in the space $X(\mathbb{R}^d)$. If $K$ is a balanced and convex subset of $X(\mathbb{R}^d)$, then
$$\Delta_\sigma(K,X) = 2\inf_{\xi\in\Theta_\sigma}\sup\big\{\|f\|_X:\ I_\xi f=0,\ f\in K\big\}.$$
For every $\xi\in\Theta_\sigma$, a mapping $\varphi: I_\xi(K)\to X(\mathbb{R}^d)$ is called an algorithm, and $\varphi(I_\xi f)$ is called a recovering function of $f$ in $X(\mathbb{R}^d)$. We use $\Phi_\xi$ to denote the set of all algorithms on $K$. If $\varphi$ can be extended to a linear operator on the linearized set of $K$, we call the algorithm $\varphi$ linear, and we use $\Phi_\xi^{L}$ to denote the set of all linear algorithms on the linearized set of $K$. The quantity
$$E_\sigma(K,X) := \inf_{\xi\in\Theta_\sigma}\inf_{\varphi\in\Phi_\xi}\sup_{f\in K}\|f-\varphi(I_\xi f)\|_X$$
is called the minimum intrinsic error of the optimal recovery of the set $K$ in the space $X$. Replacing $\Phi_\xi$ by $\Phi_\xi^{L}$ in (13), we obtain $E_\sigma^{L}(K,X)$, which we call the minimum linear intrinsic error. If $K$ is a convex and centrally symmetric subset of $X$, then, by ref. [11], the following inequality holds:
$$\frac{1}{2}\Delta_\sigma(K,X) \le E_\sigma(K,X) \le E_\sigma^{L}(K,X).$$
Let $l$ be an even number and $0<\alpha<l$. Similarly to ref. [12], for every $f\in L_M^*(\mathbb{R}^d)$, define the following differential operator:
$$(D^{\alpha}f)(x) := \lim_{\varepsilon\to 0^{+}}(D_\varepsilon^{\alpha}f)(x)\quad\text{(the limit taken in the sense of }L_M^*(\mathbb{R}^d)\text{)},$$
where $D_\varepsilon^{\alpha}$ is defined by
$$(D_\varepsilon^{\alpha}f)(x) := \frac{1}{m_{d,l}(\alpha)}\int_{|y|\ge\varepsilon}\frac{\Delta_y^{l}f(x)}{|y|^{d+\alpha}}\,dy,$$
$$m_{d,l}(\alpha) := \int_{\mathbb{R}^d}\frac{\big(e^{iy_1/2}-e^{-iy_1/2}\big)^{l}}{|y|^{d+\alpha}}\,dy,$$
where $y=(y_1,y_2,\ldots,y_d)\in\mathbb{R}^d$. For $\alpha>0$, let
$$W_M^{\alpha}(\mathbb{R}^d) := \big\{f\in L_M^*(\mathbb{R}^d)\cap C(\mathbb{R}^d):\ \|D^{\alpha}f\|_M<\infty\big\}.$$
Let $r$ be an even number. For any $f\in L_M^*(\mathbb{R}^d)$, let
$$\Delta_y^{r}f(x) = \sum_{j=0}^{r}(-1)^{j}\binom{r}{j}f\Big(x+\Big(\frac{r}{2}-j\Big)y\Big) = \big(\tau_{-y/2}-\tau_{y/2}\big)^{r}f(x),$$
where $\tau_y f(x)=f(x-y)$, $x,y\in\mathbb{R}^d$. For any real number $\rho\ge1$ and $s\in\mathbb{N}$, the function
$$k_{\rho,s}(t) = \Big(\frac{\sin\rho t}{t}\Big)^{2s},\qquad t\in\mathbb{R},\ 2s>d+\alpha,$$
is a univariate entire function of exponential type $2s\rho$, and $k_{\rho,s}(|x|)$ ($x\in\mathbb{R}^d$) is a multivariate entire function of spherical exponential type $2s\rho$. Let
$$K_{\rho,s}(x) = \lambda_{\rho,s}^{-1}\,k_{\rho,s}(|x|),$$
where $\lambda_{\rho,s}=\int_{\mathbb{R}^d}k_{\rho,s}(|x|)\,dx\asymp\rho^{2s-d}$, $\rho\to\infty$ (see ref. [13]).
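A one-line substitution, added here as a quick check of the stated order, confirms this: putting $x=u/\rho$ gives
$$\lambda_{\rho,s}=\int_{\mathbb{R}^d}\Big(\frac{\sin(\rho|x|)}{|x|}\Big)^{2s}dx=\rho^{2s-d}\int_{\mathbb{R}^d}\Big(\frac{\sin|u|}{|u|}\Big)^{2s}du,$$
and the last integral is a finite constant because $2s>d+\alpha>d$.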
Define
$$(T_{\rho,r}f)(x) := f(x)-(-1)^{r/2}\binom{r}{r/2}^{-1}\int_{\mathbb{R}^d}\Delta_y^{r}f(x)\,K_{\rho,s}(y)\,dy;$$
then, we have the following.
Lemma 3.
For $\alpha>0$, let $r\in\mathbb{N}$ ($r>\alpha$) be an even number. When $\alpha-d\ne0,2,4,\ldots$, we have, for every $f\in W_M^{\alpha}(\mathbb{R}^d)$,
$$\|f-T_{\rho,r}f\|_M \le C\rho^{-\alpha}\,\|D^{\alpha}f\|_M.$$
Proof. 
By the Minkowski inequality, we have
$$\begin{aligned}
\|f-T_{\rho,r}f\|_M &= \sup_{\rho(v;N)\le1}\int_{\mathbb{R}^d}\bigg|f(x)-\Big[f(x)-(-1)^{r/2}\binom{r}{r/2}^{-1}\int_{\mathbb{R}^d}\Delta_y^{r}f(x)\,K_{\rho,s}(y)\,dy\Big]\bigg|\,|v(x)|\,dx\\
&\le \sup_{\rho(v;N)\le1}\int_{\mathbb{R}^d}\binom{r}{r/2}^{-1}\int_{\mathbb{R}^d}\big|\Delta_y^{r}f(x)\big|\,K_{\rho,s}(y)\,dy\,|v(x)|\,dx \le \int_{\mathbb{R}^d}\big\|\Delta_y^{r}f\big\|_M\,K_{\rho,s}(y)\,dy.
\end{aligned}$$
By ref. [12], we have
$$\Delta_y^{r}f(x) = \int_{\mathbb{R}^d}\big(\Delta_y^{r}\varphi_\alpha\big)(u)\,(D^{\alpha}f)(x-u)\,du,$$
where $|\varphi_\alpha(x)|\le C|x|^{\alpha-d}$ for $\alpha-d\ne0,2,4,\ldots$, and $|\varphi_\alpha(x)|\le C|x|^{\alpha-d}\log|x|$ for $\alpha-d=0,2,4,\ldots$
For (16), by the Minkowski inequality, we have
$$\begin{aligned}
\|\Delta_y^{r}f\|_M &= \sup_{\rho(v;N)\le1}\int_{\mathbb{R}^d}\bigg|\int_{\mathbb{R}^d}\big(\Delta_y^{r}\varphi_\alpha\big)(u)\,(D^{\alpha}f)(x-u)\,du\bigg|\,|v(x)|\,dx\\
&\le \int_{\mathbb{R}^d}\big|\big(\Delta_y^{r}\varphi_\alpha\big)(u)\big|\,\sup_{\rho(v;N)\le1}\int_{\mathbb{R}^d}\big|(D^{\alpha}f)(x-u)\big|\,|v(x)|\,dx\,du \le \big\|\Delta_y^{r}\varphi_\alpha\big\|_1\,\|D^{\alpha}f\|_M.
\end{aligned}$$
By ref. [12], it is easy to see that $\|\Delta_y^{r}\varphi_\alpha\|_1\le C|y|^{\alpha}$, $|y|\to0$, for $\alpha-d\ne0,2,4,\ldots$, and $\|\Delta_y^{r}\varphi_\alpha\|_1\le C|y|^{\alpha}\log|y|^{-1}$, $|y|\to0$, for $\alpha-d=0,2,4,\ldots$. So, for $\alpha-d\ne0,2,4,\ldots$, we have
$$\|f-T_{\rho,r}f\|_M \le \|D^{\alpha}f\|_M\int_{\mathbb{R}^d}C|y|^{\alpha}\,K_{\rho,s}(y)\,dy \le C\rho^{-\alpha}\,\|D^{\alpha}f\|_M\qquad(\rho\to\infty).$$
The proof of Lemma 3 is complete.  □
For $\rho>0$, let
$$S_{\beta,\rho}f(x) := \sum_{\nu\in\mathbb{Z}^d} f\Big(\frac{\nu}{\rho}\Big)\,L_\beta(\rho x-\nu),$$
where $L_\beta(x)$ satisfies $L_\beta(\nu)=\delta_{\nu,0}$, $\nu\in\mathbb{Z}^d$, and its generalized Fourier transform is
$$\widehat{L_\beta}(y) = (2\pi)^{-d/2}\,\frac{|y|^{-\beta}}{\sum_{\nu\in\mathbb{Z}^d}|y-2\nu\pi|^{-\beta}}.$$
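The following short numerical sketch is an added illustration, not from the original; the value $\beta=4$, the truncation parameters, and the symmetric Fourier convention $L_\beta(x)=(2\pi)^{-1/2}\int_{\mathbb{R}}\widehat{L_\beta}(y)e^{ixy}\,dy$ are assumptions made only for the example. It checks the interpolation property $L_\beta(\nu)=\delta_{\nu,0}$ in dimension $d=1$ by inverting $\widehat{L_\beta}$ with a simple quadrature.

```python
# Added illustration: numerical check of L_beta(nu) = delta_{nu,0} in d = 1.
# Assumptions for this sketch: beta = 4, the lattice/integral truncations below, and
# the symmetric convention L(x) = (2*pi)**(-1/2) * int hat_L(y) * exp(i*x*y) dy.
import numpy as np

BETA = 4.0

def hat_L(y, beta=BETA, terms=200):
    """hat_L(y) = (2*pi)**(-1/2) * |y|**(-beta) / sum_nu |y - 2*pi*nu|**(-beta)."""
    den = np.zeros_like(y)
    for nu in range(-terms, terms + 1):        # truncated lattice sum in the denominator
        den += np.abs(y - 2.0 * np.pi * nu) ** (-beta)
    return (2.0 * np.pi) ** (-0.5) * np.abs(y) ** (-beta) / den

def L(x, ycut=200.0, ny=400001):
    """Approximate L_beta(x) by a simple Riemann sum over [-ycut, ycut]."""
    y = np.linspace(-ycut, ycut, ny) + 1e-9     # small shift keeps y off the points 2*pi*nu
    vals = hat_L(y)
    dy = y[1] - y[0]
    # hat_L is even, so the inverse transform reduces to a cosine integral
    return (2.0 * np.pi) ** (-0.5) * np.sum(vals * np.cos(x * y)) * dy

if __name__ == "__main__":
    for n in range(4):
        print(n, round(L(float(n)), 4))         # expected output: roughly 1, 0, 0, 0
```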
Similarly to the proof of Lemma 3, we obtain the following: let $\alpha>0$, $\rho>0$, $\beta\ge\alpha$, and $\beta>d$; then, for every $f\in W_M^{\alpha}(\mathbb{R}^d)$ ($\alpha-d\ne0,2,4,\ldots$), there exists a constant $C>0$ such that
$$\|f-S_{\beta,\rho}f\|_M \le C\rho^{-\alpha}\,\|D^{\alpha}f\|_M.$$
For $\lambda>0$, define $S\mathcal{B}_{\lambda M}(\mathbb{R}^d)$ as the set of all entire functions of spherical exponential type $\lambda$ in $L_M^*(\mathbb{R}^d)$; then, we have the following.
Lemma 4.
Let $\lambda>0$, $\sigma>0$; then, for every $f\in S\mathcal{B}_{\lambda M}(\mathbb{R}^d)$, there exists a constant $C>0$ such that
$$\|D^{\alpha}f\|_M \le C\lambda^{\alpha}\,\|f\|_M.$$
Proof. 
By the definition of $D^{\alpha}f$, we have
$$\|D^{\alpha}f\|_M = \sup_{\rho(v;N)\le1}\int_{\mathbb{R}^d}\bigg|\frac{1}{m_{d,l}(\alpha)}\int_{|y|\ge\varepsilon}\frac{\Delta_y^{l}f(x)}{|y|^{d+\alpha}}\,dy\bigg|\,|v(x)|\,dx \le C\int_{\mathbb{R}^d}\frac{\|\Delta_y^{l}f\|_M}{|y|^{d+\alpha}}\,dy.$$
Because $f\in S\mathcal{B}_{\lambda M}(\mathbb{R}^d)$, it is easy to prove that
$$\|\Delta_y^{l}f\|_M = \sup_{\rho(v;N)\le1}\int_{\mathbb{R}^d}\bigg|\sum_{j=0}^{l}(-1)^{j}\binom{l}{j}f\Big(x+\Big(\frac{l}{2}-j\Big)y\Big)\bigg|\,|v(x)|\,dx \le \sum_{j=0}^{l}\binom{l}{j}\sup_{\rho(v;N)\le1}\int_{\mathbb{R}^d}\Big|f\Big(x+\Big(\frac{l}{2}-j\Big)y\Big)\Big|\,|v(x)|\,dx \le C\|f\|_M\,\min\big\{1,(|y|\lambda)^{l}\big\}.$$
So, by (19) and (20), we have
$$\|D^{\alpha}f\|_M \le C\|f\|_M\Big(\lambda^{l}\int_0^{\lambda^{-1}}t^{\,l-\alpha-1}\,dt+\int_{\lambda^{-1}}^{\infty}t^{-\alpha-1}\,dt\Big) = C\lambda^{\alpha}\,\|f\|_M.$$
Thus, the proof of the Lemma is complete.  □
Theorem 2.
Let $k\in\mathbb{N}$, $r>0$, $k-r>0$, $1\le\theta\le\infty$, $\sigma\ge1$; then,
$$\sigma^{-r/d} \lesssim \frac{1}{2}\Delta_\sigma\big(SB^{r}_{M\theta}(\mathbb{R}^d),L_M^*(\mathbb{R}^d)\big) \le E_\sigma\big(SB^{r}_{M\theta}(\mathbb{R}^d),L_M^*(\mathbb{R}^d)\big) \le E_\sigma^{L}\big(SB^{r}_{M\theta}(\mathbb{R}^d),L_M^*(\mathbb{R}^d)\big) \lesssim \sigma^{-r/d}.$$
Proof. 
Let us establish the upper estimate first. For every $f\in SB^{r}_{M\theta}(\mathbb{R}^d)$, by ref. [5], $f$ can be represented, in the sense of $L_M^*(\mathbb{R}^d)$, by a series converging to it, i.e., $f(x)=\sum_{l\in\mathbb{Z}_+}Q_{a^{l}}(x)$, $\mathbb{Z}_+:=\{0,1,\ldots\}$, where the terms of the series are entire functions of spherical exponential type $a^{l}$, $a>1$, such that
$$\|f\|_{B^{r}_{M\theta}(\mathbb{R}^d)} \asymp \begin{cases}\Big(\sum_{l\in\mathbb{Z}_+}a^{lr\theta}\,\|Q_{a^{l}}\|_M^{\theta}\Big)^{1/\theta}, & 1\le\theta<\infty,\\[1ex] \sup_{l\in\mathbb{Z}_+}a^{lr}\,\|Q_{a^{l}}\|_M, & \theta=\infty.\end{cases}$$
Let $\alpha\in(0,r)$ and $\beta>r$. For $\rho>1$, let $N$ be a natural number satisfying $\rho<a^{N}<2\rho$. For $0\le l\le N-1$, by (18) and Lemma 4, we have
$$\|Q_{a^{l}}-S_{\beta,\rho}Q_{a^{l}}\|_M \le C\rho^{-\beta}\,\|D^{\beta}Q_{a^{l}}\|_M \le C\rho^{-\beta}a^{l\beta}\,\|Q_{a^{l}}\|_M,$$
and for $l\ge N$, we have
$$\|Q_{a^{l}}-S_{\beta,\rho}Q_{a^{l}}\|_M \le C\rho^{-\alpha}\,\|D^{\alpha}Q_{a^{l}}\|_M \le C\rho^{-\alpha}a^{l\alpha}\,\|Q_{a^{l}}\|_M.$$
Hence, by (22) and (23), we have
$$\|f-S_{\beta,\rho}f\|_M \le \sum_{l=0}^{\infty}\|Q_{a^{l}}-S_{\beta,\rho}Q_{a^{l}}\|_M = \Big(\sum_{l=0}^{N-1}+\sum_{l=N}^{\infty}\Big)\|Q_{a^{l}}-S_{\beta,\rho}Q_{a^{l}}\|_M \lesssim \rho^{-\beta}\sum_{l=0}^{N-1}a^{l\beta}\,\|Q_{a^{l}}\|_M+\rho^{-\alpha}\sum_{l=N}^{\infty}a^{l\alpha}\,\|Q_{a^{l}}\|_M.$$
By (21) and the Hölder inequality, we have
$$\sum_{l=0}^{N-1}a^{l\beta}\,\|Q_{a^{l}}\|_M \le \Big(\sum_{l=0}^{N-1}a^{lr\theta}\,\|Q_{a^{l}}\|_M^{\theta}\Big)^{1/\theta}\Big(\sum_{l=0}^{N-1}a^{l(\beta-r)\theta'}\Big)^{1/\theta'} \lesssim a^{N(\beta-r)}\,\|f\|_{B^{r}_{M\theta}(\mathbb{R}^d)} \lesssim \rho^{\beta-r}\,\|f\|_{B^{r}_{M\theta}(\mathbb{R}^d)},$$
and
$$\sum_{l=N}^{\infty}a^{l\alpha}\,\|Q_{a^{l}}\|_M \le \Big(\sum_{l=N}^{\infty}a^{lr\theta}\,\|Q_{a^{l}}\|_M^{\theta}\Big)^{1/\theta}\Big(\sum_{l=N}^{\infty}a^{l(\alpha-r)\theta'}\Big)^{1/\theta'} \lesssim a^{-N(r-\alpha)}\,\|f\|_{B^{r}_{M\theta}(\mathbb{R}^d)} \lesssim \rho^{\alpha-r}\,\|f\|_{B^{r}_{M\theta}(\mathbb{R}^d)},$$
for $1<\theta<\infty$.
By (24) to (26), we have
$$\|f-S_{\beta,\rho}f\|_M \lesssim \big(\rho^{-\beta}\cdot\rho^{\beta-r}+\rho^{-\alpha}\cdot\rho^{\alpha-r}\big)\,\|f\|_{B^{r}_{M\theta}(\mathbb{R}^d)} \lesssim \rho^{-r}\,\|f\|_{B^{r}_{M\theta}(\mathbb{R}^d)}.$$
For $\theta=1,\infty$, (27) is also valid. Let $\rho=\sigma^{1/d}$. By (27), we have
$$E_\sigma^{L}\big(SB^{r}_{M\theta}(\mathbb{R}^d),L_M^*(\mathbb{R}^d)\big) \le \sup_{f\in SB^{r}_{M\theta}(\mathbb{R}^d)}\|f-S_{\beta,\sigma^{1/d}}f\|_M \lesssim \sigma^{-r/d}.$$
Now, let us establish the lower estimate. For every $\xi\in\Theta_\sigma$, i.e.,
$$\overline{\mathrm{card}}\,\xi = \liminf_{c\to\infty}\frac{\mathrm{card}\big(\xi\cap[-c,c]^d\big)}{(2c)^d}\le\sigma,$$
there exists a cube of the form
$$Q = \Big\{x\in\mathbb{R}^d:\ a_j\le x_j\le a_j+m^{-1},\ j=1,\ldots,d\Big\},\qquad m=(2\sigma)^{1/d},$$
whose interior $\mathrm{Int}\,Q$ does not contain any point of $\xi$, that is, $\mathrm{Int}\,Q\cap\xi=\varnothing$. Thus, it can be seen that $|Q|=(2\sigma)^{-1}$. Let $\lambda(t)$, $t\in\mathbb{R}$, be a univariate function satisfying the following conditions: $\lambda(t)\in C^{\infty}(\mathbb{R})$, $\mathrm{supp}\,\lambda\subset[0,1]$, $0\le\lambda(t)\le1$ for $t\in\mathbb{R}$, and $\lambda(t)=1$ for $t\in[\frac14,\frac34]$. For $1\le\theta<\infty$, let
$$f_0(x) = \eta\prod_{j=1}^{d}\lambda\big(m(x_j-a_j)\big),$$
where $\eta$ is a positive number to be determined. It is easy to see that $f_0(x)\in C^{\infty}(\mathbb{R}^d)$, $\mathrm{supp}\,f_0\subset Q$, and $I_\xi f_0=0$; by ref. [1], we have
$$\|f_0\|_M = \sup_{\rho(v;N)\le1}\int_{\mathbb{R}^d}f_0(x)v(x)\,dx \le C\int_{\mathbb{R}^d}f_0(x)\,dx \le C\eta\,m^{-d}.$$
It is easy to see that
$$\|\Delta_t^{k}f_0(\cdot)\|_M \le C\eta\,m^{-d}\,\min\big\{1,(m|t|)^{k}\big\}.$$
In addition, we have
$$\|f_0\|_{b^{r}_{M\theta}(\mathbb{R}^d)} \le C\eta\,m^{-d}\bigg(\int_0^{m^{-1}}m^{k\theta}\,t^{(k-r)\theta-1}\,dt+\int_{m^{-1}}^{\infty}t^{-r\theta-1}\,dt\bigg)^{1/\theta} \le C\eta\,m^{-d+r}.$$
For $\theta=\infty$, (29) is also valid. By (28) and (29), if we let $\eta=m^{d-r}C^{-1}$, then $f_0\in SB^{r}_{M\theta}(\mathbb{R}^d)$. Let
$$\bar Q = \Big\{x\in\mathbb{R}^d:\ a_i+\frac{1}{4m}\le x_i\le a_i+\frac{3}{4m},\ i=1,\ldots,d\Big\};$$
then, for every $\xi\in\Theta_\sigma$, we have
$$d\big(I_\xi^{-1}I_\xi f_0\cap SB^{r}_{M\theta}(\mathbb{R}^d)\big) \ge \|f_0\|_M \ge \|f_0\|_{M(\bar Q)} \ge C\,m^{d-r}\,(2m)^{-d} \asymp \sigma^{-r/d}.$$
By (30) and the definition of $\Delta_\sigma\big(SB^{r}_{M\theta}(\mathbb{R}^d),L_M^*(\mathbb{R}^d)\big)$, we have
$$\Delta_\sigma\big(SB^{r}_{M\theta}(\mathbb{R}^d),L_M^*(\mathbb{R}^d)\big) \gtrsim \sigma^{-r/d}.$$
By (14), the proof of the Theorem is complete.  □
A comparison with refs. [10,14,15] suggests that the study of approximation problems in Orlicz spaces has potential application value and promising prospects for further development.

Author Contributions

Writing—original draft, X.L. and G.W.; Writing—review and editing, X.L. and G.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (11761055) and the Fundamental Research Funds for the Inner Mongolia Normal University (2023JBZD007).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jiang, Y.; Liu, Y. Average Widths and Optimal Recovery of Multivariate Besov Classes in Lp(Rd). J. Approx. Theory 2000, 102, 155–170. [Google Scholar] [CrossRef]
  2. Wu, C.; Wang, T. Orlicz Space and Its Applications; Heilongjiang Science and Technology Press: Harbin, China, 1983. (In Chinese) [Google Scholar]
  3. Sun, Y. Approximation Theory of Functions; Beijing Normal University Press: Beijing, China, 1989; Volume 1. (In Chinese) [Google Scholar]
  4. Pinkus, A. N-Widths in Approximation Theory; Springer: New York, NY, USA, 1985. [Google Scholar]
  5. Nikol’skii, S.M. Approximation of Functions of Several Variables and Imbedding Theorems; Springer: New York, NY, USA, 1975. [Google Scholar]
  6. Ding, Z. The sampling theorem, L p T -approximation and ε-dimension. J. Approx. Theory 1992, 70, 1–15. [Google Scholar] [CrossRef]
  7. Wu, G. On approximation by polynomials in Orlicz spaces. Approx. Theory Its Appl. 1991, 7, 97–110. [Google Scholar] [CrossRef]
  8. Sun, Y.; Fang, G. Approximation Theory of Functions; Beijing Normal University Press: Beijing, China, 1990; Volume 2. (In Chinese) [Google Scholar]
  9. Liu, Y. Average σK width of class of smooth functions of Lpq(Rd) in Lq(Rd). Chin. Ann. Math. Ser. B 1995, 16, 351–360. [Google Scholar]
  10. Li, X.; Wu, G. Infinite Dimensional Widths and Optimal Recovery of a Function Class in Orlicz Spaces in L(R) Metric. J. Math. 2023, 2023, 6616280. [Google Scholar] [CrossRef]
  11. Traub, J.F.; Woźniakowski, H. A General Theory of Optimal Algorithms; Academic Press: New York, NY, USA, 1980. [Google Scholar]
  12. Samko, S.G. Spaces $L_{p,r}^{\alpha}(\mathbb{R}^n)$ and hypersingular integrals. Stud. Math. 1977, 61, 193–230. (In Russian) [Google Scholar]
  13. Liu, Y. Lq(Rd)-optimal recovery on the Reisz potential spaces with incomplete information. J. Beijing Norm. Univ. (Nat. Sci.) 1997, 33, 143–150. [Google Scholar]
  14. Romaniuk, A.S. On the best trigonometric and bilinear approximations of anisotropic Besov function classes of many variables. Ukr. Math. J. 1995, 47, 1097–1111. (In Russian) [Google Scholar]
  15. Rabab, E.; Badr, L.; Hakima, O. On some nonlinear elliptic problems in anisotropic Orlicz–Sobolev spaces. Adv. Oper. Theory 2020, 8, 24. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
