Friday, December 29, 2017

YouTube daily up Dec 29 2017

Turning to some stock market action now.

Korea's benchmark KOSPI wrapped up 2017 with its last trading day of the year on Thursday.

Through many ups and downs,... the main bourse ended the year higher than twelve months ago.

Our Kim Mok-yeon gives us a recap of the relatively good year,.. and what the market might be

like in 2018.

It's been a prosperous year for the local stock market, as the Korea Composite Stock Price

Index, after six years of trading within a narrow range, broke out to surpass the 2,500

mark for the first time in its history.

[STANDUP ed: steve] "For the last trading day of 2017, Korea's

main stock exchange closed at 2,467, up some 22 percent from the final close of the previous year,

which was just below 2,030."

Experts say the main driving forces were the IT, health care, and finance sectors.

Tech shares started the day brightly on Thursday, with Samsung Electronics moving up 3.24 percent

and number two chipmaker SK Hynix gaining 1.86 percent from the previous session's close.

Pharmaceutical shares also traded higher, with Samsung Biologics up 1.37 percent.

The rising trend is likely to stay for a while, as experts say the KOSPI 3,000 era could be

approaching.

(ENGLISH) "Over the next two years, we think that the

KOSPI will break through 3,000-plus; if it goes higher, it may even reach 3,300.

The reason is the very strong earnings coming through right now. You can look

at double-digit earnings growth again in 2018, and we think that in 2019 the earnings

growth will continue."

Korea's junior stock index, the KOSDAQ, also had a remarkable year.

The KOSDAQ closed at 798 on Thursday, continuing its upward streak for a fourth consecutive day.

The index surpassed the 800 mark for the first time in ten years during trading this month,

after the government laid out measures to support small and mid-sized companies.

The local stock markets will re-open on January second at 10 a.m., one hour later than usual.

Kim Mok-yeon, Arirang News.

For more information >> Korean stocks end 2017 up 22%; analysts see another good year - Duration: 2:12.

-------------------------------------------

Korea's consumer prices up 1.5 percent on-year in December - Duration: 0:41.

South Korea's consumer prices accelerated in December at their fastest pace in three

months, marking a rebound from November's one-point-three percent, which was the lowest

gain in 2017.

Statistics Korea says consumer prices went up by one-and-a-half percent on-year in December.

Prices of fresh food went down, with vegetable prices plunging 16 percent on-year.

But petroleum prices jumped seven-point-five percent in December, raising prices for petrochemical

and other industrial goods.

Overall consumer prices for the whole of 2017 jumped one-point-nine percent, the steepest

gain in five years.

For more information >> Korea's consumer prices up 1.5 percent on-year in December - Duration: 0:41.

-------------------------------------------

Hot Wheels RACE OFF | Dune It Up- Out of Fuel | Top Racing Gameplay mobile Android/ios 2017 - Duration: 3:10.

Hot Wheels RACE OFF | Dune It Up- Out of Fuel | Top Racing Gameplay mobile Android/ios 2017

For more information >> Hot Wheels RACE OFF | Dune It Up- Out of Fuel | Top Racing Gameplay mobile Android/ios 2017 - Duration: 3:10.

-------------------------------------------

Arsenal on alert as Brazilian playmaker weighs up January exit from Barcelona ● News Now ● #AFC - Duration: 1:44.

Barcelona midfielder Rafinha will ask to leave the Catalan club midway through the January

transfer window if he is not involved in first team matches, according to Mundo Deportivo.

The 24-year-old has been sidelined with a knee injury since April and had to undergo

another operation in October after a setback in his recovery.

He is expected to be fully fit after Spain's winter break, but the player remains uncertain

about his role at the club.

Rafinha is reportedly concerned about losing his chance of playing at the World Cup with

Brazil,

and Mundo Deportivo report that if he isn't involved in the Spanish Cup last 16 clash

against Celta Vigo or for the league games against Levante or Real Sociedad,

he will ask the club whether it is time to move on.

Arsenal, Juventus, Inter Milan and Liverpool have all reportedly tracked the Brazilian

international over the past few months, but none have placed a formal offer on the table

due to the player's injury issues.

Now with the midfielder returning to full fitness, and likely to be available to sign

in January, will the Gunners revive their interest?

For more information >> Arsenal on alert as Brazilian playmaker weighs up January exit from Barcelona ● News Now ● #AFC - Duration: 1:44.

-------------------------------------------

* Make Up Tutorial - Burgundy - Especial Festas . Maquilha e Fala. * - Duration: 15:53.

For more information >> * Make Up Tutorial - Burgundy - Especial Festas . Maquilha e Fala. * - Duration: 15:53.

-------------------------------------------

Bizmates初級ビジネス英会話 Point 225 "A toss-up" - Duration: 4:17.

Hello everyone, Justin here and welcome to this week's Bizmates for Beginners

video lesson, where every week we introduce a new word, idiom, or expression

to help you with your daily business conversation. OK everyone this week, we

are going to learn "a toss-up." So what does a toss-up mean? Well, stay tuned to

find out, but first everyone, let's do a quick review of last week's class.

Alright, so imagine I asked you: How can we fix this problem? Because I've been

trying to think of this on my own for a long time, and I just can't figure it out.

So if you have any good ideas, please let me know. So what do you say?

Alright I'll give you five seconds to think of a response using the expression

from last week, okay? Are you ready? Alright, go.

OK and time is up everyone.

If you said "if we put our heads together, we can do it" then that's exactly right.

Good job and thank you for remembering last week's expression. Alright so

let's move on to "a toss-up." OK now this is what I sometimes hear:

Alright I'm talking to my colleague, Taro, and I say: Taro, so what will you do

for winter vacation? New Year's is just right around the corner, maybe you're

taking some extra time off. So what are you going to do?

Taro, he says: Well it's difficult to decide between going to the hot springs

or just relaxing at home. Alright, well that sounds really good. Both options

sound really nice okay. So yes I think this is fine, it's difficult to decide

between A or B, but you might hear a native English speaker use something

like this ~ it's a toss-up between going to the hot springs or just relaxing at home.

So both options -- going to the hot springs or staying at home

both options are really good, so it's really difficult to choose A or B. So in

this case you can say "it's a toss-up" okay? It's a toss-up between... pasta or

pizza, okay? Or it's a toss-up between this option and this option, okay? So if

both are equally good, you can use this expression: it's a toss-up. Okay?

Alright, so pronunciation, it's very simple okay? It's as you see it here, so repeat after

me: it's a toss-up. Your turn.

Alright, very good. OK so after my question here:

What will you do for winter vacation?

Alright, good. Yes, okay perfect.

So please remember this for next time, okay everyone? Alright our bonus question

this week is another way to say: "yes I am." If you know what the answer is, you can

leave it in the comments below. If you want to find out what the answer is, you

can find it in one of our previous Bizmates for Beginners video lessons.

OK everyone, so this is our last video lesson of this year before New Year's

time, so I want to wish you all a very happy new year. All the best in 2018 and

I'm looking forward to seeing you in our first video for next year, okay? Alright

so take care everyone, and I'll see you next time. Thank you.

For more information >> Bizmates初級ビジネス英会話 Point 225 "A toss-up" - Duration: 4:17.

-------------------------------------------

Disney Junior Doc McStuffins Baby Check Up All In One Nursery Toys - Duration: 6:20.

For more information >> Disney Junior Doc McStuffins Baby Check Up All In One Nursery Toys - Duration: 6:20.

-------------------------------------------

The Living Tombstone - Jump Up, Super Star! (+ Русские субтитры) - Duration: 4:16.

For more information >> The Living Tombstone - Jump Up, Super Star! (+ Русские субтитры) - Duration: 4:16.

-------------------------------------------

Synthetic Gradients Tutorial - How to Speed Up Deep Learning Training - Duration: 20:25.

Hi, I'm Aurélien Géron, and today I'm going to explain how Synthetic Gradients can

dramatically speed up training of deep neural networks, and even often improve their performance

significantly.

We will also see how they can help recurrent neural networks learn long term patterns in

your data, and more.

Synthetic Gradients were introduced in a paper called "Decoupled Neural Interfaces using

Synthetic Gradients" published on arXiv in 2016 by Max Jaderberg and other DeepMind

researchers.

As always, I'll put all the links in the video description below.

To explain Synthetic Gradients, let's start with a super quick refresher on Backpropagation.

Here's a simple feedforward neural network that we want to train using backpropagation.

Each training iteration has two phases.

First, the Forward phase: we send the inputs X to the first hidden layer, which computes

its outputs h1 using its parameters theta1, and so on up to the output layer, and finally

we compute the loss by comparing the network's outputs and the labels.

Then the Backward phase.

The algorithm first computes delta3, which are the gradients of the loss with respect

to h3, then these gradients are propagated backwards through the network, until we reach

the first hidden layer.

The final step of Backpropagation uses the gradients we have computed to tweak the parameters

in the direction that will reduce the loss.

This is the gradient descent step.

Okay, that's it for Backpropagation.
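
To make the two phases concrete, here is a minimal NumPy sketch of one training iteration for a small feedforward network with parameters theta1 to theta3 and outputs h1 to h3, in the spirit of the walkthrough above; the layer sizes, the ReLU activations, and the squared-error loss are illustrative assumptions rather than details from the video.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 4 inputs, two hidden layers of 8 units, 3 outputs.
sizes = [4, 8, 8, 3]
thetas = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def relu(z):
    return np.maximum(z, 0.0)

def train_step(x, y, lr=0.01):
    # Forward phase: h1, h2, then the output h3 (linear output layer).
    hs = [x]
    for i, theta in enumerate(thetas):
        z = hs[-1] @ theta
        hs.append(relu(z) if i < len(thetas) - 1 else z)
    loss = 0.5 * np.sum((hs[-1] - y) ** 2) / len(x)

    # Backward phase: delta3 -> delta2 -> delta1, with a gradient descent step per layer.
    delta = (hs[-1] - y) / len(x)                        # gradient of the loss w.r.t. h3
    for i in reversed(range(len(thetas))):
        grad_theta = hs[i].T @ delta                     # gradient w.r.t. theta_i
        if i > 0:
            delta = (delta @ thetas[i].T) * (hs[i] > 0)  # propagate through the ReLU
        thetas[i] -= lr * grad_theta                     # gradient descent step
    return loss
```

Calling train_step repeatedly on mini-batches is the whole algorithm. The point to keep in mind for what follows is that no parameter can be touched until the full forward and backward passes have finished.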

Now suppose you want to speed up training.

You buy 3 GPU cards, and you split the neural network in three parts, with each part running

on a different GPU.

This is called model parallelism.

Unfortunately, because of how Backpropagation works, model parallelism is inefficient.

Indeed, to compute the loss, you first need to do a full forward pass sequentially.

Each GPU has to wait for the previous GPU to finish working on a training batch before

it can start working on it.

This is called the Forward Lock.

Notice that the model parameters cannot be updated before the loss is computed.

And this is called the Update Lock.

And finally, we cannot update a layer's parameters before the backward pass is complete,

at least down to the layer we want to update.

This is called the Backward lock.

The consequence of all these locks is that GPUs will spend most of their time waiting

for the other GPUs.

As a result, training on 3 GPUs using model parallelism is actually slower than training

on a single GPU.

So, the main idea behind Synthetic Gradients is to break these locks, in order to make

model parallelism actually work.

Let's see how.

First we send the inputs to the first hidden layer.

Then this layer uses its parameters theta1 to compute its outputs.

So far, nothing has changed.

But now we also send the outputs h1 to a magical little module M1, called a Synthetic Gradient

model.

We'll see how it works in a few minutes, but for now it's just a black box.

This model tries to predict what the gradients for the first hidden layer will be.

It outputs the synthetic gradients delta1 hat, which are an approximation of the true

gradients delta1.

Using these synthetic gradients, we can immediately perform a gradient descent step to update

the parameters theta1, no need to wait.

This hidden layer equipped with its Synthetic Gradient model is effectively decoupled from

the rest of the network.

This is called a Decoupled Neural Interface, or DNI.

In parallel, the second layer can do the same thing.

It uses a second Synthetic Gradient model M2 to predict what the gradients will be for

the second hidden layer.

And it performs a gradient descent step.

And so on up to the output layer.

This time instead of using a Synthetic Gradient model, we might as well compute the true gradients

directly and use these true gradients delta3 to update the parameters theta3.

And we are done!

Notice that we only did a forward pass, no backward pass.

So just like that, training could potentially be up to twice as fast.
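
As a rough sketch of that per-layer, forward-only update in PyTorch (my own illustration, not the paper's code; the class name, the SGD optimizer, and the single-linear-layer choice for the synthetic gradient model are assumptions):

```python
import torch
import torch.nn as nn

class DecoupledLayer(nn.Module):
    """A hidden layer bundled with its synthetic gradient model M_i."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.layer = nn.Sequential(nn.Linear(n_in, n_out), nn.ReLU())
        self.sg_model = nn.Linear(n_out, n_out)    # predicts dLoss/dh_i from h_i
        self.opt = torch.optim.SGD(self.layer.parameters(), lr=0.01)

    def forward_and_update(self, x):
        h = self.layer(x)
        delta_hat = self.sg_model(h).detach()      # synthetic gradient for h
        self.opt.zero_grad()
        h.backward(gradient=delta_hat)             # fills this layer's parameter grads
        self.opt.step()                            # update immediately, no waiting
        return h.detach()                          # hand a detached h to the next layer
```

Each GPU would own one such block, call forward_and_update on whatever batch arrives, and pass the detached activations along, which is exactly the pipelining described next.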

Just to be clear, the Synthetic Gradient models are only used during training.

After training, we can use the neural network as usual, based on the trained parameters

theta1, 2 and 3.

Okay, now let's see how this technique enables model parallelism during training.

Once again, let's split the network into three parts, each running on a different GPU

card.

And the CPU will take care of loading the training instances and pushing them into a

training queue.

We start by loading the first training batch.

And while the first GPU is computing h1, and updating its parameters using synthetic gradients,

we can already load batch number 2 and push it into the queue.

Then while layer 2 takes care of batch number 1, layer 1 can already take care of batch

number 2.

No need to wait!

And so on, so you get the picture.

Now each layer is working in parallel on a different batch, so all GPUs are active, they

are much less blocked waiting for other GPUs to finish their jobs.

And we can continue like this until the end of training.

As you can imagine, this can dramatically reduce training time.

However, every time we go from one layer to the next, we need to move a lot of data across

the GPU cards.

This can take a lot of time and in practice it can far outweigh the benefits of this architecture.

But if you have a deep neural network composed of, say, 30 layers then you can split it in

3 parts of 10 layers each.

You can use Synthetic Gradient models at every hidden layer, or every few hidden layers,

or just at the interfaces between the GPU cards.

With so many layers, the time required to copy the data across GPU cards is now small

compared to the total computation time, so the GPU cards spend much less time waiting

for data, and you can hope to train your network close to 3 times faster than using regular

Backpropagation on a single GPU card.

So model parallelism actually works!

Great!

Now it's time to open the black boxes and see how the Synthetic Gradient models work.

Let's focus on a hidden layer i.

It has its own Synthetic Gradient model Mi which produces synthetic gradients delta i

hat, and these synthetic gradients can be used to update the hidden layer's parameters

without waiting for the true gradients to be computed, as we have just seen.

This model can simply be a small neural network.

For example, a single linear layer, with no activation function.

Or it could have a hidden layer or two.

We will simply train the Synthetic Gradient model Mi so that it gradually learns to correctly

predict the true gradients delta i.

For this, we can just train the Synthetic Gradient model normally, by minimizing a loss

function.

We can just use regular Backpropagation here, nothing fancy.

For example, we can minimize the distance between the synthetic gradients and the true

gradients (in other words, the L2 norm of their difference), or we can minimize the

square of that distance.

But this begs the question: how do we compute the true gradients delta i?

If we need to wait for the loss function to be computed and for the true gradients to

flow backward through the network, then we have somewhat defeated the purpose of synthetic

gradients.

Fortunately, there's a neat trick to avoid this.

We can just wait for the next layer to compute its synthetic gradients delta i+1 hat and

then we just Backpropagate these synthetic gradients through layer i+1.

This does not really give us the true gradients delta i, but hopefully something pretty close.

Of course if the next layer happens to be the output layer, then we might as well compute

the true gradients and Backpropagate them.

Over time, the Synthetic Gradient models will get better and better at predicting the true

gradients, and this will be useful both for updating the parameters correctly and also

for providing accurate gradients to train the Synthetic Gradient models in lower layers.

And that's it, you now know what synthetic gradients are, how they work and how they can speed

up neural network training.
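
A hedged sketch of both halves of that training procedure, continuing in the same PyTorch style (the function names are mine; I use the squared L2 distance, and torch.autograd.grad stands in for the backpropagation through layer i+1):

```python
import torch
import torch.nn.functional as F

def train_sg_model(sg_model, sg_opt, h_i, target_grad):
    """Fit M_i so that M_i(h_i) matches the best available estimate of dLoss/dh_i."""
    sg_opt.zero_grad()
    loss = F.mse_loss(sg_model(h_i.detach()), target_grad.detach())  # squared L2 distance
    loss.backward()
    sg_opt.step()

def estimate_for_previous_layer(next_layer, h_i, delta_hat_next):
    """Backpropagate layer i+1's synthetic gradient through layer i+1 to get an
    approximation of the true gradient delta_i (used as `target_grad` above)."""
    h_i = h_i.detach().requires_grad_(True)
    h_next = next_layer(h_i)                   # recomputed here only to keep the sketch short
    (grad_h_i,) = torch.autograd.grad(h_next, h_i, grad_outputs=delta_hat_next)
    return grad_h_i.detach()
```

If layer i+1 happens to be the output layer, you would pass the true gradients of the loss instead of delta_hat_next, exactly as described above.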

But there are a few more important things to mention.

Firstly, Synthetic Gradients can be used pretty much on any type of network, including convolutional

neural networks such as this one.

Just add Synthetic Gradient models after some hidden layers, and that's about it.

Each Synthetic Gradient model's outputs must have the same shape as its inputs, that

is the same shape as the outputs of the layer they are attached to.

For example, M1's outputs must have the same shape as the outputs of this convolutional

layer.

Suppose it's a convolutional layer with 5 feature maps of size 400x200, then that's

exactly the shape that M1 must output.

That's a 5x400x200 array.

In practice, you can use a shallow convolutional neural network that preserves the shape of

its inputs, so for example a couple convolutional layers with zero padding and stride 1 would

do just fine.
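
For the convolutional example above, a shape-preserving synthetic gradient model could look like this (the kernel sizes and the 16 intermediate feature maps are illustrative guesses, not the paper's exact architecture):

```python
import torch.nn as nn

# Input and output shape: (batch, 5, 400, 200), matching the host layer's feature maps.
conv_sg_model = nn.Sequential(
    nn.Conv2d(5, 16, kernel_size=3, stride=1, padding=1),  # zero padding keeps 400x200
    nn.ReLU(),
    nn.Conv2d(16, 5, kernel_size=3, stride=1, padding=1),  # back to 5 feature maps
)
```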

Here's another important point.

Until now, the input of each Synthetic Gradient model Mi was only the output of the corresponding

layer, hi.

But it is perfectly legal to provide additional information to the Synthetic Gradient model,

so that it can make better predictions.

For example, we can give it the labels of the current batch.

This is called a conditional Decoupled Neural Interface, or cDNI.

In the paper, the authors show that cDNI consistently performs better than regular DNI, so it should

probably be your default choice.
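
Making the interface conditional amounts to giving the labels to the synthetic gradient model alongside h_i. A minimal sketch, assuming one-hot labels are simply concatenated to the layer's outputs (the exact wiring in the paper may differ):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalSGModel(nn.Module):
    """Synthetic gradient model for a conditional DNI (cDNI): it sees the labels too."""
    def __init__(self, h_dim, n_classes):
        super().__init__()
        self.n_classes = n_classes
        # A single linear layer; the cDNI models in the MNIST experiment below had no hidden layers.
        self.net = nn.Linear(h_dim + n_classes, h_dim)

    def forward(self, h, labels):
        y = F.one_hot(labels, self.n_classes).float()
        return self.net(torch.cat([h, y], dim=1))
```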

So in the paper, they experimented with the MNIST dataset of handwritten digits, using

various architectures and training methods.

In particular, they used this fully connected network with 3 to 6 hidden layers of 256 neurons

each.

They used Batch normalization and the ReLU activation function at each hidden layer.

And here is a graph presented in Figure 2 in the paper.

It shows the learning curves for 3 to 6 hidden layers and for various training methods.

For example, when trained using regular Backpropagation, the network reaches below 2% error on the

test set, and it gets better when you add more layers.

Using Synthetic Gradient models at each hidden layer, the final performance of the 3 layer

network ends up being better than before, but it takes time to train the synthetic models,

so overall, you know, it's a little bit longer than Backpropagation.

When you add more layers, the network's performance actually decreases, and training

time increases.

That's not great.

Note that each synthetic gradient model is actually composed of two hidden layers of

1024 neurons each, and one output layer of 256 neurons.

They also used batch normalization and the ReLU activation function in the hidden layers.

Finally, they tried training the network using conditional DNI.

The network gets better when you add more layers, and with 6 layers it actually reaches

the best performance overall.

Moreover, as you can see, this is the fastest learning architecture.

It reaches less than 2% error in just a few thousand iterations.

Surprisingly, they used very simple synthetic gradient models, without any hidden layers

here.

I am curious to know why they did not use the same synthetic models for DNI and cDNI,

because it feels like we are comparing apples and oranges.

Anyway, it clearly demonstrates that cDNI performs much better than Backpropagation

on this task, both in terms of final accuracy and training speed.

There are many more results in the paper, if you're interested, in particular great

results with Convolutional Neural Nets.

Another great application of Synthetic Gradients is in Recurrent Neural Networks.

At each time step t, a recurrent layer takes the inputs Xt, as well as its own outputs

from the previous time step h_t-1, and it produces the output h_t.

It is convenient to represent RNNs by unrolling them through time, across the horizontal axis,

like this.

First the recurrent layer takes the inputs at time t=0, and it has no previous outputs.

It then outputs h_t=0

And at the next time step, it takes the inputs X_t=1 and the previous outputs h_t=0.

To be clear, these two boxes represent the same recurrent layer at two points in time.

Then it outputs h_t=1

And we could go on and on and on…

However, during training, we have to stop at one point, or else we will run out of memory.

We can then compute the loss based on the outputs produced so far.

And we can perform Backpropagation.

And finally we can update the parameters of the recurrent layer.

This technique is called Truncated Backpropagation through time.

It works well, but it has its limits.

In particular, since we only computed the loss on a few outputs, we know nothing about

the future losses.

So in practice, this means that the network cannot learn long-term patterns.

So let's see how Synthetic Gradients can help solve this problem.

Instead of stopping at time step t=3, let's unroll the network for just one additional

time step.

But instead of using its outputs to compute the loss, we send them to a Synthetic Gradient

model.

It estimates the gradients for that time step, delta_t=4_hat.

And we backpropagate these gradients through the layer to get an estimate of delta_t=3.

We can then perform regular Backpropagation through time, by mixing the true gradients

and the estimated future gradients.

Finally, once we have all the gradients we need, we can update the parameters of the

recurrent layer by performing a gradient descent step.

We must not touch the last unrolled cell, because this would change its output h_t=4,

and we are going to need it in a minute to train the Synthetic Gradient model.

So by using Synthetic Gradients in a recurrent neural network like this, we can capture long term

patterns in the data even if we unroll the network through just a few time steps.

Now, let's see how we can train the Synthetic Gradient model.

For this, we will need to run the network on the next few time steps, so let's move

forward in time.

Okay, clean up a bit and push this to the left to have more space.

Okay, now we run the RNN on the next few time steps.

Okay, we compute the loss.

We add an extra time step and we use the Synthetic Gradient model to estimate the gradients for

that time step.

And just like earlier, we Backpropagate these synthetic gradients and we mix them with true

gradients.

And now this process gives us something pretty close to the true gradients for time step

4, and we can use these gradients to train the Synthetic Gradient model.

Next, we can use the gradients we computed to update the RNN's parameters.

And boom!

Of course we could repeat this process many times, and both the RNN and the Synthetic

Gradient model would get better and better.

It does add some complexity, but you can bet

that the main Deep Learning libraries will soon hide this complexity from us, hopefully.
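
Here is a rough PyTorch sketch of one truncated-BPTT span with that bootstrap step. For brevity it predicts the future gradient directly from the carried hidden state rather than unrolling the extra cell shown in the video, and the cell type, sizes, and optimizers are all my own assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

cell = nn.RNNCell(input_size=10, hidden_size=32)
readout = nn.Linear(32, 1)
sg_model = nn.Linear(32, 32)     # predicts the gradient of all future losses w.r.t. h_t
opt = torch.optim.SGD(list(cell.parameters()) + list(readout.parameters()), lr=0.01)

def tbptt_span(xs, ys, h):
    """Run one span of truncated BPTT; xs and ys are lists of T tensors, h the carried state."""
    h = h.detach().requires_grad_(True)           # boundary of this span
    h_in, loss = h, torch.zeros(())
    for x, y in zip(xs, ys):                      # unroll T time steps
        h = cell(x, h)
        loss = loss + F.mse_loss(readout(h), y)

    # Bootstrap: inject the synthetic gradient as if it had flowed back from the future.
    future_grad = sg_model(h).detach()
    opt.zero_grad()
    torch.autograd.backward([loss, h], grad_tensors=[torch.ones_like(loss), future_grad])
    opt.step()                                    # update the RNN with mixed gradients
    return h, h_in.grad.detach()                  # carried state + gradient at the boundary
```

Training sg_model itself then just means regressing its prediction at the previous span's boundary state onto the gradient returned here, which is the "use these gradients to train the Synthetic Gradient model" step from the walkthrough.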

And if you need some motivation, here are some amazing results.

This graph is a simplified version of Figure 4 in the paper, and it comes from DeepMind's

great blog post about Synthetic Gradients, which I highly encourage you to read (the

link is in the video description below).

It shows the performance of various RNNs on the Penn Treebank task, which is a language

modelling task.

The horizontal axis shows training time, and the vertical axis shows the model's error,

measured in bits per character (BPC).

The three dashed lines are the learning curves of a regular RNN using Backpropagation through

time, unrolled through 8, 20 or 40 time steps.

So the more you unroll the RNN, the longer it takes to train, and the more data it requires,

but also the better the performance it eventually reaches.

Now compare these three dashed lines to the solid line on the left: it shows the learning

curve of an RNN trained using Backpropagation through time unrolled through just 8 time

steps, but this time using synthetic gradients.

As you can see, the model reaches the lowest error, even better than the model unrolled

through 40 time steps, and it takes roughly half as much time and data to train.

That's really impressive!

Okay next!

Yet another really interesting idea in the paper aims to break the forward lock.

Recall that the Forward lock is the fact that we need to wait for the lower layers to finish

before we can compute the top layers.

It may sound impossible to break this lock, but it is in fact quite simple: you can just

equip any layer you want with a Synthetic Input model.

For example, let's add a Synthetic Input model I3 to layer 3, which is the output layer.

It allows us to skip the hidden layers 1 and 2 by computing h2_hat, an approximation of

h2, the inputs of layer 3.

We can just feed h2_hat directly to the output layer.

And ta-da!

We've just broken the forward lock.

As you might guess, once we eventually get the output of the hidden layer 2 we can use

it to train the Synthetic Input model.

This is really the exact same idea as earlier, but going forwards rather than backwards.

In fact, we can even use the same trick as earlier to go even faster.

Instead of letting the signal propagate through the whole network to compute h2, we can just

use the synthetic input model from the previous layer and feed it to the hidden layer 2, and

this will give us something hopefully close enough to h2, to train I3, the Synthetic Input

model of layer 3.
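
A minimal sketch of that synthetic input model in the same PyTorch style; the 784/256/10 sizes echo the MNIST network described earlier, and everything else is my own illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

synthetic_input_model = nn.Linear(784, 256)   # I3: maps the raw batch x to h2_hat
output_layer = nn.Linear(256, 10)             # layer 3
si_opt = torch.optim.SGD(synthetic_input_model.parameters(), lr=0.001)

def forward_without_waiting(x):
    # Layer 3 runs immediately on an estimate of its input; the forward lock is broken.
    h2_hat = synthetic_input_model(x)
    return output_layer(h2_hat)

def train_synthetic_input_model(x, h2_true):
    # Called later, once the real h2 arrives from hidden layers 1 and 2
    # (or from their own synthetic input model, as just described).
    si_opt.zero_grad()
    F.mse_loss(synthetic_input_model(x), h2_true.detach()).backward()
    si_opt.step()
```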

To conclude, let's look at the data flow of a fully Decoupled Neural Interface that

uses both synthetic inputs and synthetic gradients.

First, the Synthetic Input model receives the next training batch and computes an approximation

of the layer's inputs, h_i-1_hat.

Then, the hidden layer computes its outputs h_i and feeds them simultaneously to the next

layer and to its own Synthetic Gradient model.

These gradients are backpropagated through the hidden layer, which gives a reasonably

good approximation of the true gradients for the previous layer.

The gradients delta_i-1 are just sent back to the previous layer, which will use them

to update its own Synthetic Gradients model.

And immediately after that, we can update the layer's parameters using the Synthetic

Gradients delta_i_hat.

At some point we receive the outputs of the previous layer, h_i-1, and we will use them

to train the Synthetic Input model.

And lastly, we receive the gradients from the next layer, and we use them to train the

Synthetic Gradients model.

And that's it!

The DNI is ready to handle the next training batch.

If you want to learn more about Synthetic Gradients, I encourage you to read the paper

itself, as it touches on a few more topics, such as many implementation details, or how

Synthetic Gradients can help two Recurrent Nets communicate efficiently when they don't

tick at the same rate, and so on.

Also check out the links in the video description, there are several interesting blog posts and

implementations, and I might add my own implementation at one point.

If you want to learn more about Deep Learning, check out my book Hands-On Machine Learning

with Scikit-Learn and TensorFlow.

In particular, there's a whole chapter on running TensorFlow across multiple GPUs and

servers.

There's also a German version and a French version, and I believe a Chinese version should

be out in the next few weeks.

And that's all I had for today!!

I hope you enjoyed this video and that you found it useful.

If you did, please, like, share, comment, subscribe, and you can also follow me on Twitter

if you're into that.

See you next time and I wish you a very Happy New Year!

For more information >> Synthetic Gradients Tutorial - How to Speed Up Deep Learning Training - Duration: 20:25.

-------------------------------------------

5 Quick Sugar Scrub Recipes For Glowing Face | How To Glow Up Overnight - Remedies One - Duration: 3:44.

Getting all the benefits of a spa day without having to spend a penny sounds like a pretty sweet deal. Today's video will discuss sugar scrub home recipes.

Before you watch this video, please take a moment to subscribe to our YouTube channel by clicking the subscribe button, then tap the bell icon so you will be the first to know when we post new videos daily.

Using a sugar scrub on your skin is a great way to exfoliate it. When mixed with the right ingredients, the scrub can add a ton of nutrients to your skin while also removing all the dirt and impurities.

Depending on the type of skin you have, you can use any of the following recipes for softer, glowing skin in no time. Let's start the list off with 5 sugar scrub recipes for glowing skin.

1. The olive oil and sugar scrub. The olive oil and sugar scrub is ideal for you if you have dry skin. Follow these simple steps and your scrub will be ready in no time.

Procedure: pour a bit of olive oil into a bowl or jar and add 2 teaspoons of honey to it. The consistency can get quite thick. Once you've mixed the two, add the sugar. You need to add enough sugar to make the mixture grainy as opposed to watery. Use a cotton bud and exfoliate your skin.

2. The coconut oil and sugar scrub. This is another sugar scrub recipe that can be used for dry skin. Coconut oil douses your skin in all the nutrients it needs in order to stay moisturized.

Procedure: pour some coconut oil into a bowl and add some honey to it. Add the sugar in and, as was the case with the previous recipe, ensure that the consistency is grainy and not watery. Use a cotton bud to exfoliate your skin.

3. The lemon and sugar scrub. If you are looking for a sugar scrub that can help you get rid of your pimples, then this one is it.

Procedure: just add the juice of one lemon to a bowl and add sugar to it. Dip some cotton in the mixture and rub it all over your face.

4. The turmeric and sugar scrub. Turmeric is great for your skin: it makes it glow and also lightens it. Follow these simple steps and get glowing skin.

Procedure: put some turmeric powder in a bowl and add a few drops of honey to turn the powder into a paste. If you don't want to use honey, you can use a mild moisturizer. Add the sugar to this mixture. Exfoliate your skin with a cotton bud.

5. The aloe vera and sugar scrub. It's no secret that aloe vera is great for your skin. However, this is a sugar scrub for face care that requires you to look for ingredients outside your kitchen.

The best way to use this scrub is to get aloe vera gel. These are easily available and have just the right consistency to turn into a scrub. Put it in a bowl, add the sugar granules, then scrub away.

Making a sugar scrub is incredibly easy. Sugar has just the right texture to scrub out all the dirt and impurities from your face and make it look super clean, fresh, and moisturized. Go ahead and make your own scrub today.

Which recipe do you like? Let me know in our comment section below. If you liked this video, give it a thumbs up and share it with your friends. For more daily tips, subscribe to our channel below. Thank you.

For more information >> 5 Quick Sugar Scrub Recipes For Glowing Face | How To Glow Up Overnight - Remedies One - Duration: 3:44.

-------------------------------------------

Still Not Giving Up (French Cover) - Duration: 1:51.

Hey do you know the history of the odious gem war

Which began long before my birth

So many years of difficult fighting to protect the human race

It was difficult to stay in the dance

I think I have my share of mistakes still

I didn't know that Rose Quartz would make so many stories

Some wounds don't heal with my power

Oh, I feel that I wasted everything

This fabulous destiny opened my eyes for real

So many good times and many bad times

But we're all in the same boat and we're still not giving up

We're still not giving up

I'm still not giving up

Thanks for listening to me

If you also sometimes have trouble managing your emotions

I advise you to write songs

I'll gladly listen to them

I leave you friends. Kisses, bye!

For more information >> Still Not Giving Up (French Cover) - Duration: 1:51.

-------------------------------------------

Son of Sunnyside Coach suits up for NMSU - Duration: 1:40.

For more information >> Son of Sunnyside Coach suits up for NMSU - Duration: 1:40.

-------------------------------------------

Kevmo - Running Up (ft. Joey Jewish) [CC Lyrics] - Duration: 4:52.

Hook: I wanna know, watchu know

Where you been, on the low I hit your phone, you're not alone yeah,

you're not alone I would ride for

you baby Tell a lie for you baby

Even die for you baby Lately I've been feeling crazy yeah

You tryna play me on the other side You got me runnin up

Bridge: You got me runnin baby you know

Some things I gotta let go And back in I'm lustin you know

Ima sit back let you go They just went back on you woah

You got me runnin up runnin up Somethings up you got something up yeah

Verse 1: baby got me stuntin

Pull up and I'm buzzin Came up like you nothin

Shawty wanna roll yeah Diamonds make me glow when it's not lit

Uh she called me a savage yeah Had to make sure that I had it yeah

Gotta check up get my facts in yeah Gotta check in get my tax in yeah

I wanna know, watchu know Where you been, on the low I hit your phone, you're not alone yeah, you're not alone I would ride for you baby Tell a lie for you baby Even die for you baby Lately I've been feeling crazy yeah You tryna play me on the other side You got me runnin up (x2)

Verse 2: I aint feel a vibe in a minute

I aint feel alive but I'm living Got my cup poured up and I'm sippin

Go and run it up, go and run it up I aint think I had enough, but I'm sad enough

I was down in the moment but it got me in a headcase

Woke up in a new place yeah I woke up in a new phase yeah

Are you down for the moment down for the moment Baby stick around for the moment yeah

I said you down for the moment down for the moment

Baby stick around for the moment yeah.

There's something that I wish was your question

This is a lie You're taking my life

Been in my heart, tell a [?] Need to be alone for a minute or two

We're losing touch, I don't think I know you

You're too conformed, I don't think that's gon' do

(I thought that you said that we were in love)

Yeah You're on the road, wonder where'd you go, bae I'm all alone

[Hook: Kevmo] I wanna know, watchu know Where you been, on the low I hit your phone, you're not alone yeah, you're not alone I would ride for you baby Tell a lie for you baby Even die for you baby Lately I've been feeling crazy yeah You tryna play me on the other side You got me runnin up

Runnin up, I'm runnin up Runnin up, I'm runnin up

Runnin up, got me runnin up (2X)

For more information >> Kevmo - Running Up (ft. Joey Jewish) [CC Lyrics] - Duration: 4:52.

-------------------------------------------

Find out What 'My 600-Lb Life' Stars Steven and Justin Are up to Today! - Duration: 2:16.

The Assanti brothers — Steven and Justin — weighed more than 1,400 pounds combined

when they were featured on TLC's hit show, My 600-lb.

Life.

But even though they were siblings, they both had very different approaches to their health

and weight loss — which is why they both had very different outcomes.

Steven, who used to eat six pizzas a day before joining the show, started his weight loss

journey at 734 pounds and has since lost 57 pounds.

The 33-year-old is arguably the most controversial of all cast members — throwing tantrums,

secretly ordering his beloved pizzas, abusing painkillers, and going off on controversial

YouTube rants.

In 2012, he filmed himself screaming, "Thank you for paying taxes because without you,

I would not have this urinal to pee in.

Without you, I would not have these cans of food to eat.

Without you, I wouldn't have pills to take to keep me alive."

Over Fourth of July weekend, the Massachusetts native also ranted on Twitter that Americans

have the right to "burn the flag!"

No wonder he doesn't get along well with his brother!

Justin simply told TLC cameras during the filming of their episode, "We just don't

get along."

The 27-year-old tipped the scale at 604 pounds and feared that if he continued to stay in

the house playing Monopoly and video games, he would end up like his brother.

"I know I'm gaining a lot [of weight], and I'm worried I'm going to end up like Steven,

stuck in a bed every day.

I don't want that to happen to me," he said.

Justin explained that things improved when Steven moved out, but he's afraid he will

return.

"I'm glad Steven is gone, and if he ever comes back to live here again, I don't know

if I'm going to be able to handle it."

Justin added, "Ever since we were really young, we just never have gotten along.

Our childhood was rough, and Steven just made it a lot rougher."

Today, it's unclear if the brothers are still estranged, but like Steven once said, "Pizza

will always be there."

Amen!

For more information >> Find out What 'My 600-Lb Life' Stars Steven and Justin Are up to Today! - Duration: 2:16.

-------------------------------------------

A Step Up Funds story - Duration: 0:46.

I'm here with Corporal Owens. Can you tell us what we're doing today?

We're just spreading a little extra Christmas cheer.

We thought maybe you could use a little extra Christmas cheer and maybe ... use some help.

We've got some stuff here. We got a little more down in the car.

May we bring it in? [off camera a woman replies, "Yes."]

For more information >> A Step Up Funds story - Duration: 0:46.

-------------------------------------------

TestVideo What Up! Updates and chatting. Like for more. - Duration: 0:34.

(I didn't say bitch)

This the only way I can contact with y'all.

Because my camera can't connect yet.

Heyy.

Moody is my only subscriber but Imma get more so...

So YEAH

BYEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

For more information >> TestVideo What Up! Updates and chatting. Like for more. - Duration: 0:34.

-------------------------------------------

MODI DIPLOMACY AT WORK AFRICAN NATIONS LOOKING UP TO INDIA FOR ENTREPRENEURSHIP TIPS - Duration: 4:56.

For more information >> MODI DIPLOMACY AT WORK AFRICAN NATIONS LOOKING UP TO INDIA FOR ENTREPRENEURSHIP TIPS - Duration: 4:56.

-------------------------------------------

[RE-UP ENG SUB] 160318 STAR! B.A.P Interview - Duration: 9:46.

YG: Hello everyone, we are B.A.P Yessir! DH & HC: Hello everyone!

Bang Yongguk

Himchan

Daehyun

Youngjae

Jongup

MC: Today everyone's here for the award ceremony, how do you feel?

DH: Are you too happy?

HC: I'm speechless, i feel very honored.

YJ: Aren't you hot? HC: Yes?

YJ: You look like you're feeling hot.

HC: Weather today is very good. DH: He just came back from hunting.

HC: We are very thankful that GAON invited us to this award ceremony.

DH: We will show you guys amazing stage performance today as well.

[Hot Trend Award - Male Category]

[Hot Trend Award - Male Category] B.A.P!

ZL: Hello we are B.A.P.

ZL: First, thank you for giving us this award

ZL: Thank you to our TS Ent CEO Kim Tae Song

ZL: Thank you to TS family, staff that were always with us

ZL: We will continue to strive hard from now on to be a group that spreads KPOP

ZL: Thank you everyone.

MC: Himchan-ssi always uses SNS like Weibo to communicate with Chinese fans

MC: Do you always read fans' comments?

HC: I always read them, but since I can't understand Chinese...

DH: I saw it. Himchan hyung's Weibo in China, am I right?

DH: He uploaded many photos he took with Chinese artists

DH: If you read the comments, you will realize that people leave more comments for the Chinese artists than for Himchan hyung

HC: So Suju's ZhouMi hyung told me to tell him if I have any thoughts, and he will translate them for me

HC: So it's always like that.

DH: Just ask our agency's staff to translate for you.

HC: I know right, why am I like that?

MC: Many fans said Daehyun is very greedy for food. Is it true?

HC: It's true. DH: I'm really a greedy eater.

DH: But I'm not the one who eats the most. HC: It's the truth.

MC: Another question. It's been a long time since you visited China. Is there any Chinese food that you miss?

DH: There's always food that we'd like to eat. HC: Duck.

MC: Ahh Peking Duck.

DH: Yes that's very tasty.

MC: Don't you like mutton skewer? HC: Mutton skewer with beer.

DH: Frankly speaking, I have never had mutton skewers in my life

YJ: Really? DH: Not even once, really.

YJ: You're so pitiful. You should try eating some.

MC: Is he pitiful? HC: He's very pitiful.

DH: I will definitely try it during my next trip to China.

YJ: He asked me something; he asked me whether I had tried it before.

YJ: I had~ it's so tasty *professional annoyer*

DH: I got it.

MC: Next question. Zelo-ssi is getting taller, do you feel burdened?

MC: What's your height when you last measured it?

ZL: The last time I measured, it was 187 cm.

ZL: I don't feel burdened.

DH: I am very burdened. HC: We are very stressed.

ZL: Is that so? HC: We are always..

YJ: Recently when we take photos he always comes to my side.

YJ: It's true. ZL: I will go to that side.

[B.A.P Please Choose] Make a choice according to the given situation. Stand left if you choose A; Stand right if you choose B.

MC: Let's start with the first question.

MC: A - I really can't tolerate the habit of XXX member.

MC: B - There's no such issue.

MC: B - It's still acceptable, nothing much to complain about.

MC: It could be habits like someone barged into the toilet when you're using it.

MC: Because i really can't tolerate such habit anymore!

MC: Or "No such thing, everything is well"

YJ: It's okay. B!

DH: Me me i choose here.

MC: Ahh so cool.

ZL: I'm neutral. MC: Sorry there's no neutral.

*Sigh I'll go over there*

MC: So it's a competition between people who are complain and people who receive complaints.

MC: Spill a secret

HC: No no no. DH: Why? Why?

HC: I have no idea why he is here

[Blur]

HC: Please say something.

HC: This is so unexpected. JU: Actually it's not a big deal..

DH: Is it me? Why are you looking at me?

DH: Don't look at me.

JU: Daehyun hyung's room is originally cold,

JU: When hyung sleeps he has a small habit.

DH: What kind of habit?

JU: There's something *didn't want to embarrass his hyung*

DH: I've no idea since I've fallen asleep.

HC: Just say it out. YJ: He snores?

[Youngjae who spilled it first]

DH: It's possible to snore when a person is too tired.

YJ: It's possible but it's too serious. DH: Alright, i know, i'm sorry.

YJ: I'm always worried if Daehyun is dead. [He said he's fine but now he spilled the most]

DH: I have serious sinus problem, that's why.

YJ: Just suddenly, he looks like he can't catch his breath anymore.

DH: Didn't you say you have nothing to complain? Why are you standing there?

MC: You're right. DH: Why are you there?

HC: Come here.

MC: Those who stand there have no rights to speak. DH: Yes please keep quiet.

JU: I will stop here.

DH: Okay. JU: Watch out your nasal voice.

DH: Yes yes i will.

MC: So the next will be...

DH: Me... I'd like to complain about Himchan hyung.

HC: Is it my turn now?

HC: Me standing here... [This group is fighting against each other]

DH: Himchan hyung likes to shower a lot but

HC: *grabbed the mic* I shower frequently.

DH: Okay i know.

HC: *switching topic* There's something I want to say

HC: It happened yesterday. Yesterday, when I was showering, I thought he was dead.

YJ: Why?

HC: I wanted to walk quietly, but I accidentally hit the door slightly

HC: I thought Daehyun was unconscious.

YJ: It can't be. This is obviously a lie.

[Lie that's exposed in a second]

DH: He's lying. He's panicking, that's why he started talking nonsense.

YJ: I was there at the moment.

*Witness* YJ: I saw it when Himchan was showering

HC: You don't have the rights to speak.

MC: People standing there have no rights to speak out.

[Pitiful Youngjae who has his right to speak taken way] HC: So what i wanted to say is Daehyun...

MC: Let's move to the next one.

MC: A - I'm addicted to mobile phone + SNS

MC: Can't live without your mobile phone.

MC: B - I can still live very well without mobile phone.

MC: To be honest i can't live without my phone lately.

MC: Me too. DH: I choose A.

DH: Everyone choose A.

HC: We are more addicted to other things.

YJ: Aren't you going to marry your mobile phone?

HC: This is not what we're addicted to.

MC: May i know what are you addicted to?

HC: For us, we prefer to communicate face to face.

DH: He's lying.

HC: Right?

MC: Please tell us the truth. DH: Can I tell the truth?

DH: Then this show can't be broadcasted anymore.

[Rated R] YJ: This is not a variety show.

DH: It isn't? So can I say it?

HC: Because we are preparing for comeback recently... *switched the topic again*

HC: We are practising the choreography. DH: Right.

DH: Recently after we have ended our practice, they will stayback to continue practising.

HC: Yes. DH: This is really cool.

HC: We are addicted to dancing.

DH: For us it's because we want to have some interaction with our fans

DH: We wanted to communicate more frequently with our fans, that's why.

YJ: What is the question again? DH: Can't live without mobile phone.

DH: In order for us to keep communicating with fans, we have to be addicted.

MC: How did the topic end up like this?

MC: Next one.

MC: A - There's a lot of members' "black history" stored in my phone.

MC: The moment you lost your phone, the world is going to end.

MC: B - There isn't many weird things stored in my phone.

MC: Ahhh friends who are here need to think carefully.

ZL: There's a lot.

HC: We're doomed. There's a lot.

YJ: I caught you when you took a photo of me last time.

ZL: Actually i do this for revenge.

HC: Ah revenge. ZL: All the weird photos of the members I took are in my notebook.

HC: Don't you have it too?

MC: Seems like you have to keep your notebook well.

DH: The moment i lose my mobile phone,

DH: B.A.P will perish.

MC: That many? DH: It's gonna be a big deal.

MC: What are the ones with the least impact?

DH: The lightest ones would be those that I took when Jongup was showering.

MC: Is that considered low impact? DH: Yes.

HC: I don't understand. MC: I don't understand either.

DH: I was lying down, Jongup kept singing when he took his shower

DH: Without wearing anything

DH: I opened the door and realized he's singing while being naked

HC: Under that situation...

DH: Like he's doing a skit on his own.

HC: He even paused and covered his face.

JU: Because that's our house, of course I have to be casual at our house.

DH: You're right, that's why I also casually took a picture of it.

HC: It's too funny.

MC: The last question.

MC: A - At this moment, i wanna tell China BABY "I love you"

MC: B - I Miss You.

HC: A/B just choose either one.

MC: Then you could just stand in the middle.

DH: Middle.

MC: You can say it together, I love you I miss you.

YJ: Wo Ai Ni (I love you in chinese)

DH: Get out of the way.

ZL: Wo Ai Ni.

DH: I miss you.

HC: I love you because I miss you.

HC: So romantic *narcissistic*

JU: I miss you.

[B.A.P want to tell BABYs...]

HC: We will be meeting you guys with our 5th mini album <CARNIVAL> soon.

HC: Hope everyone will give us lots of support.

HC: Very soon we will meet you during our Live On Earth concert too

HC: Hope to see you guys there. DH: I love you guys.

HC: I love you guys. ZL: Hao! (Okay in chinese)
