I became interested in mathematical blogging after visiting Terence Tao’s and Timothy Gowers’s blogs on numerous occasions. A sizable number of mathematicians now disseminate valuable information through their blogs, and I see this as a healthy sign. Such blogs provide a wealth of information to students like me, and, dare I say, I learn most of my math from them!

I intend to write about math mostly as an exercise in exposition, which I expect will serve me well later on. I will also post some problems in the “Problem Corner” section every now and then.

Let’s see how this goes. I am hoping my enthusiasm for blogging will not wear off too soon!


