Wednesday, June 21, 2017

KAUST Team Reveals Thermodynamic Disorder In GaN-Based Nanowires



New study shows the thermodynamic entropy behaviour of InGaN/GaN nanowires



Figure 1: (a) Schematic and layer structure of the InGaN/GaN p-i-n nanowires, (b) plan-view, and (c) elevation-view SEM images of the nanowires.

GaN-based nanowires have the potential to realise integrated optoelectronic systems because of their high quantum efficiencies, and they are also expected to pave the way for new photodetector device architectures and improved photosensitivity.

Nanowire-based GaN p-i-n devices are suitable for attenuators, high-frequency switches, and photodetector applications. However, non-radiative recombination limits their performance.

Recently, researchers from the King Abdullah University of Science and Technology (KAUST) have been looking at the operational and thermal stability of group-III-nitride nanowires to find out more about their applicability in multifunctional applications.

Led by Xiaohang Li, Iman S. Roqan, and Boon S. Ooi, the team studied the photoinduced entropy of InGaN/GaN p-i-n double-heterostructure nanowires (shown above) using temperature-dependent photoluminescence. 

They defined the photoinduced entropy as a thermodynamic quantity that represents the unavailability of a system’s energy for conversion into useful work due to carrier recombination and photon emission. They also related the change in entropy generation to the change in photocarrier dynamics in the InGaN active regions using results from a time-resolved photoluminescence study.

They hypothesised that the amount of randomness generated in the InGaN layers of the nanowires increases as the temperature approaches room temperature.

To study the photoinduced entropy, the scientists developed a mathematical model that considers the net energy exchange resulting from photoexcitation and photoluminescence. Using this approach, they observed an increasing trend in the photoinduced entropy generated by the system above 250 K, while below 250 K they observed an oscillatory trend in the generated entropy that stabilises between 200 and 250 K.
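The paper’s exact expression is not reproduced in this summary, but a rough, purely illustrative sketch of the bookkeeping such a model performs is the following: the part of the absorbed optical energy that is not re-emitted as luminescence is dissipated in the lattice, and dissipating that heat at temperature $T$ generates entropy of order

$$\Delta S_{\mathrm{gen}} \approx \frac{\langle E_{\mathrm{abs}}\rangle - \langle E_{\mathrm{em}}\rangle}{T},$$

where $\langle E_{\mathrm{abs}}\rangle$ is the mean energy delivered per absorbed excitation photon and $\langle E_{\mathrm{em}}\rangle$ the mean energy carried away per emitted photon. In this simplified picture, stronger non-radiative recombination widens the gap between absorbed and emitted energy and so increases the entropy generated.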

The decrease in total recombination lifetime with increasing temperature reflects a shortening of the non-radiative recombination lifetime, which the scientists attributed to the presence of surface defects on the nanowires.

They attributed the increase in total recombination lifetime at low temperatures to the thermal annihilation of non-radiative recombination centres, and hypothesised that the activation of non-radiative recombination channels contributes to the overall increase in entropy above 250 K.

“Since the entropy of a system sets an upper limit on the operational efficiency of a photoluminescent device, our study provides a qualitative description of the evolution in thermodynamic entropy generation in GaN-based nanowires,” Nasir Alfaraj, a PhD student in Li’s group and the paper’s first author, tells Compound Semiconductor. “Our findings will enable researchers working on developing and fabricating devices operating at various temperatures to better predict efficiency limitations.”

The researchers plan to further investigate the photoinduced entropy in other materials and types of structure. They also plan to present comparisons between samples with varied nanowire diameters and thin films of different materials. They stressed that they are open to collaborations with researchers who are interested in similar properties of their samples.

This collaborative work, which received financial support from KAUST and King Abdulaziz City for Science and Technology (KACST), is detailed in the paper ‘Photoinduced entropy of InGaN/GaN p-i-n double-heterostructure nanowires’, published in Applied Physics Letters 110 (16) (2017).

Source

Monday, March 28, 2016

James Reinders: Parallelism Has Crossed a Threshold

Tiffany Trader

Is the parallel everything era here? What happens when you can assume parallel cores? In the second half of our in-depth interview, Intel’s James Reinders discusses the eclipsing of single-core machines by their multi- and manycore counterparts and the ramifications of the democratization of parallel computing, remarking “we don’t need to worry about single-core processors anymore and that’s pretty significant in the world of programming for this next decade.” Other topics covered include the intentions behind OpenHPC and trends worth watching in 2016.

HPCwire: Looking back on 2015 what was important with regard to parallel computing? And going forward, what are your top project priorities for 2016?

James Reinders: I’ll start with my long-term perspective. I was reminded that it’s been about a decade now since multicore processors were introduced back in about 2004. We’ve had about a decade of going from ‘multicore processors are new’ to having them everywhere. Now we’re moving into manycore. And that has an effect I don’t think a lot of people talk about. Ten years ago, when I was teaching parallel programming, and even five or six years ago, there were still enough single-core machines around that when I talked to people about adding parallelism, if they weren’t in HPC, they had to worry about single-core machines. And I can promise you that the best serial algorithm and the best parallel algorithm — assuming you can do the same thing in parallel — are usually different. And it may be subtle, but it’s usually enough of a headache that a lot of people were left (outside of HPC) having to run a conditional in their program, to say if I’m only running on single-core let’s do this, and if I’m in parallel, let’s do it in parallel. And it might be as simple as if-def’ing their OpenMP or not compiling it with OpenMP, but if you wanted it to run on two machines, a single-core and a multicore, you pretty much had to have a parallel version and a non-parallel version for a lot of critical things, because the parallel program tended to have just a tiny bit of overhead, so that if run on a serial machine it would slow down.
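As a concrete illustration of the dual-path pattern he describes (a hypothetical sketch, not code from the interview), the same loop can be guarded so that a build without OpenMP gets the plain serial version and a build with OpenMP gets the parallel one:

```c
/* Hypothetical sketch of the "program twice" pattern Reinders describes:
 * one source file serves both serial and parallel builds. Compiled without
 * OpenMP, the pragma disappears and the loop runs serially with no parallel
 * overhead; compiled with -fopenmp (or equivalent), it uses all cores. */
#include <stdio.h>

double sum_of_squares(const double *x, long n)
{
    double total = 0.0;
#ifdef _OPENMP
    #pragma omp parallel for reduction(+:total)
#endif
    for (long i = 0; i < n; i++)
        total += x[i] * x[i];
    return total;
}

int main(void)
{
    enum { N = 1000000 };
    static double x[N];
    for (long i = 0; i < N; i++)
        x[i] = (double)i / N;
    printf("sum of squares = %f\n", sum_of_squares(x, N));
    return 0;
}
```

Compiled with OpenMP the loop spreads across cores; compiled without it, the pragma drops out and the binary behaves like the old single-core version, so the two-version headache collapses into a single source file.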

So I ran into cases 8-10 years ago where someone would implement something in parallel and it would run 40 percent faster on a dual-core machine, but 20 percent slower on a single-core because of the little overhead. Now we don’t have to worry about this anymore, even when I go outside of HPC – HPC’s been parallel for so long – although the node level is kind of doing the same thing in this time period. Seriously, I was in an AT&T store and they were advertising that they had quad-core tablets and I was laughing. There’s also some octo-core things now in that world. I just laughed because it reinforces my point that we don’t need to worry about single-core processors anymore and that’s pretty significant in the world of programming for this next decade, that it doesn’t hold back the stack – we don’t have to program twice in any field anymore. We can just assume parallel cores. I think that that’s a big deal.

I’m a big believer in the democratization of parallel programming and HPC. I think that we keep seeing things that make it more accessible, one of them is having parallel compute everywhere, the other is advancing tools. And I think we saw a couple of things introduced in 2015 towards that democratization that combined with the fact that everything’s parallel is going to be transformative in the upcoming year and decade. A few of them are big things that I was involved with at Intel. One was that we’ve had a pretty successful foray into promoting code modernization. To be honest, I wasn’t so sure myself because I’ve been talking about parallelization so long, that I thought everyone was listening. I think there’s a lot of dialogue left to happen to truly get all of us to understand the ways to utilize parallelism. In our code modernization efforts, we’ve had things ranging from on-site trainings and events to online webinars and tools and they’ve been extraordinarily popular.

I’m also very excited about OpenHPC, and my perspective comes from visiting all these different computer centers: I get the wonderful opportunity to have logons on different supercomputers around the world. I can use systems at TACC and Argonne and CSCS in Switzerland and many others, and they all solve similar problems. They all bring together, for the most part, all these different open source packages. They usually have multiple compilers on them; they have a way to allocate parts of the machine. They have a way to determine which version of GCC you’re using along with which version of the Intel compiler, etc. They all solve the same problems, but they all do it differently. There are quite a few people like myself that have logons on multiple supercomputers. So if you talk to scientists doing their work, lots of them have multiple logons and they have to learn each one, but that also means that they aren’t sharing as many BKMs [i.e., best-known methods]. There’s a lot of replication, and when you look at the people that are supporting your supercomputer, I think there’s a lot of opportunity to bring more commonality in there and let the staff that you have focus on higher-level concerns or newer things. So OpenHPC really excites me because it’s bringing together packages much like these centers already have, and validating them — leaving you the flexibility to pick and choose, but at least giving a baseline that’s validated to work together — giving a solution to these common problems, even with pre-built binaries.

And I’ve had the good fortune to sit in on the community sessions, people in HPC that have been debating, and it’s interesting because there have been some pretty heated debates about what the best way is to solve some of these problems, but at the end of the day they may come up with two solutions to a problem, or maybe they’ll pick one that’s best. But then they’re kind of solving it industry-wide instead of one compute center at a time, and I think that’s going to help with the democratization of supercomputing and HPC. So that got off the ground in 2015, and I think 2016 will be very interesting to see how that evolves. I expect to see more people join it and I expect to see a lot of heated debates about what the best way to solve something is. But these are the sort of debates that have never really happened before, because one compute center could argue with another compute center about what the best way to do something is, and they could both go off and do it differently. OpenHPC gives them the opportunity for the debate to happen and then maybe settle on one solution that both centers, or lots of centers, evolve.

I’m also really excited that we got three Knights Landing machines deployed outside of Intel. In 2016, we’ll see that unfold, and there is enormous anticipation over Knights Landing. I think it’s very well justified, because taking this scalable manycore design to a standalone processor is going to be a remarkable transformation in the parallel computing field, with a very bright future ahead of it.

HPCwire: With regard to OpenHPC, do you expect that more wary associates like IBM, which has been pushing the OpenPOWER ecosystem so strongly, would also be a member? We talked with them and they said they were looking at it for pretty much the reasons you’ve outlined. What are your thoughts about membership?

Reinders: You know I can’t speak for IBM or predict what they are going to do, but I do think that the purposes of OpenHPC, the problems they’re solving, would definitely be beneficial to IBM and quite a few other companies and centers that haven’t joined yet. I think a lot of people learned about it at Supercomputing, so that’s not a surprise. The goal of OpenHPC is certainly to be a true open community group. So the Linux Foundation – it’s their thing, we are heavily involved obviously, and they need to come up with the governance models and so forth, but I can say that it would be an extreme disappointment if it wasn’t open enough that everybody felt welcome enough to come participate and benefit from it. So I certainly hope to see them participate in 2016, but I think that ball is in their court.

There’s been some dialogue or debate about whether OpenHPC is Intel’s answer to OpenPower, and I don’t think that is the right way to look at it. OpenHPC has the opportunity to bring the entire industry together as opposed to being partisan to one architecture or another. Now that said, our heavy involvement in OpenHPC getting started means that we did what we do best, which is our best effort at making sure that there are recipes already written up for our architecture, but hopefully they weren’t written in a way that you can’t just go write one for POWER or any other architecture. That wasn’t the goal, but frankly we’re not experts in other people’s architectures. So hopefully what we’ve done is left open enough that someone else can come in and, if they want to invest effort for their own architecture, do so. It didn’t include the specification of a microprocessor in its design or anything, so it’s definitely different than OpenPower in that respect.

HPCwire: To recap, what are the top five things you are looking forward to in 2016?

Reinders: The top one to me is Knights Landing becoming more widely available beyond the three systems that are out there. I think that’s going to be huge. Having a manycore processor instead of a coprocessor is going to fuel a lot of interesting results and debates, which I think will be great. I think OpenHPC is going to be very interesting, watching how that evolves. Nothing comes for free, so it’s going to be up to the folks that show up to the table in the community to contribute, but I think that will be very significant during the year.

The other two areas I look forward to seeing evolve this year are code modernization and big data. I like seeing how we can get better and better at explaining the benefits of parallel computing to a broader set of users and to the users that you already think are doing parallel programming. I think code modernization will continue to stay on the docket as a very important dialogue. And then I think that big data, including data analytics and machine learning, will continue to see very significant developments with more nitty gritty work going on. There have been a lot of demonstrated kernels and some interesting work done, but this year the ramp-up is going to continue very fast. The interest in big data and what it can do for companies is very significant and I think we’ll continue to see a lot of things pop out there.

The other thing I’m following closely is the shift of visualization to the CPU. We’ve had some really interesting work in that area. There’s kind of been an assumption that when you’re doing visualization, having a specialty piece of silicon or a GPU to do the visualization must be the answer, but it turns out GPUs are focused on the sort of visualization you need to do to display on the screen, the rasterization. A lot of visualization work is going on on supercomputers and machines where there are a lot of benefits to not rushing the rasterization so quickly. In particular with ray tracing, we’re seeing a lot of use there, where ray tracing is clearly much better on the CPU, including Knights Landing. It’s been interesting watching that surprise people. There are a lot of people in the know that are doing visualization on CPUs and finding much higher performance for their purposes. I’ll go out and add that to my list since you asked for five things. I think in 2016 there will be more aha’s and realizations that visualization is increasingly becoming a CPU problem.

HPCwire: What are your thoughts on the National Strategic Computing Initiative to coordinate national efforts to pursue exascale and maximize the benefits of HPC, and should there be more investment in national research centers for software?

Reinders: I do like to point out that hardware is meaningless without software, so yes the software challenges are substantial. If I had any say in it I would encourage us to worry more about the connection of software to the domain experts rather than in a pure computer science fashion. I think there is a lot of interesting work going on in that space. If you had such centers, I would think of them as being applied science, and I think that would be an area of applied science that would be very useful.

As for the National Strategic Computing Initiative, how can I not love it? I think the fate of nations rests on their ability to harness compute power. There’s no doubt about that. Whether or not we want to be so dramatic as to call it a battlefield, it is definitely an area of competition. As an American, I’m very glad to see my country not missing that point. I just got back from India, another large democracy, and they are having very similar discussions in their country and they are rolling out their initiatives. Every country has to consider the role that computing, especially high-performance computing, plays in the competitiveness of their nation. The US has been a leader in this area for so long that I think our dialogue is about how to continue to lead the world through our own activities.

This was the second part of a two-part interview. To read the first half, where Reinders discusses the architectural trade-offs of Knights Landing’s manycore design and offers advice for expectant users, go here.

Source

DNA Sequences Can Now Be Found in Minutes Instead of Days

Researchers in Carnegie Mellon University’s computer science department have developed a method for searching DNA sequences in a matter of minutes, a notable improvement over the days this previously required.
The technology, conceived by Carl Kingsford and Brad Solomon of the Computational Biology Department, is designed to find DNA and RNA sequences using a new indexing data structure called the Sequence Bloom Tree, or SBT. The National Institutes of Health (NIH) maintains a gigantic database of sequences, around three quadrillion base pairs in total, which researchers use to gain insight into basic biological processes related to possible cancer cures.
Thousands of hard drives would be needed to store all those sequences (at roughly one byte per base, that is about three petabytes), and finding the ones you want, typically sequences of 50 to 200 base pairs, among those quadrillions is a titanic task.
“The database holds an incalculable amount of knowledge that we have yet to discover,” says Kingsford. “The main problem is that it is very hard to search.”
The SBT works much like a library catalogue. At the first level, the system checks whether the DNA sequence is in the database at all. If it is, the search continues by splitting the whole index into two halves and determining which half contains it; that half is then split again, and so on, until the desired sequences are found.
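A minimal, hypothetical sketch of that divide-and-conquer query (not the authors’ implementation; the Bloom-filter membership test is stubbed out) illustrates the idea: every node summarizes all of the experiments beneath it, so entire halves of the index can be discarded at once.

```c
/* Hypothetical sketch of a Sequence Bloom Tree query. Each node carries a
 * Bloom filter summarizing all sequencing experiments beneath it; if the
 * filter says the query cannot be present, that entire half of the index
 * is skipped, otherwise the search keeps splitting until it reaches the
 * leaves (individual experiments). */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct SBTNode {
    struct SBTNode *left, *right;  /* NULL at the leaves */
    const char *experiment_id;     /* set only at the leaves */
    /* ... the node's Bloom-filter bit array would live here ... */
} SBTNode;

/* Stand-in for a real Bloom-filter membership test over the query's k-mers;
 * here it never prunes, just to keep the sketch self-contained. */
static bool bloom_might_contain(const SBTNode *node, const char *query)
{
    (void)node; (void)query;
    return true;
}

static void sbt_query(const SBTNode *node, const char *query)
{
    if (node == NULL || !bloom_might_contain(node, query))
        return;                               /* prune this whole subtree */
    if (node->left == NULL && node->right == NULL) {
        printf("candidate experiment: %s\n", node->experiment_id);
        return;
    }
    sbt_query(node->left, query);             /* search one half ... */
    sbt_query(node->right, query);            /* ... then the other  */
}

int main(void)
{
    SBTNode blood = { NULL, NULL, "blood_rnaseq" };
    SBTNode brain = { NULL, NULL, "brain_rnaseq" };
    SBTNode root  = { &blood, &brain, NULL };
    sbt_query(&root, "ACGTACGTACGT");         /* a toy 12-base query */
    return 0;
}
```

Because the filters near the root rule out most of the archive immediately, the work concentrates on the parts of the collection that could actually contain the query, which is roughly where the minutes-instead-of-days speedup comes from.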
Kingsford and Solomon tested their technique on an archive of 2,652 blood, breast, and brain datasets, each of which often contains more than a billion base pairs of RNA. The SBT completed the task in about 20 minutes on average, an enormous advance considering that existing techniques such as SRA-BLAST and STAR would take 2.2 days and 921 days, respectively. On top of that, up to 200,000 searches can be run simultaneously. Researchers from different fields can make use of SBT, since it is open-source software.
The work has been published in the journal Nature Biotechnology.

Source

Sunday, July 27, 2014

Researchers Analyse Nanomaterials with Electrical and Magnetic Properties

Useful in medical, electronic, computing and solar-cell technologies, multiferroic ceramics and thin films perform several functions and can store large amounts of data in small devices

In search of new multiferroic materials that combine electrical and magnetic properties, a group of researchers at UNAM’s Centro de Nanociencias y Nanotecnología (CNyN) is testing, at the nanoscale, new combinations for the technology of the twenty-first century.

Useful as components in medical, electronic and computing devices and in solar cells, multiferroics are materials with broad industrial application, but they contain polluting elements such as lead.

“One of the challenges is to gradually replace the components that harm the environment with new combinations, but without losing efficiency,” said Jesús María Siqueiros Beltrones, a researcher at the university’s CNyN campus in Ensenada.

Electric and magnetic

Ferroelectricity is the ability of some materials to store information in their crystal structure without being connected to a power source such as mains electricity or batteries. The data are stored through electric polarization, which can be set externally by a voltage and persists even after that voltage is removed.

Ferromagnetism, meanwhile, shows similar behaviour, except that here we are dealing with magnetic polarization and magnetic dipoles; the origin of ferroelectricity and that of magnetism are nonetheless different.

There is also a third phenomenon of this kind, ferroelasticity, which refers to spontaneous deformations of the material.

A material is called multiferroic when its behaviour exhibits at least two of these three properties. Siqueiros Beltrones explained that materials with this dual capability can be made in the form of ceramics (pellets or tablets) or thin films (layers with thicknesses ranging from a few nanometres up to 500 nanometres).

“As ceramics they dominate certain kinds of medical applications. For example, lead zirconate titanate (PZT, which converts mechanical energy into electricity and vice versa) is used to produce ultrasound sources (the basis of diagnostic studies) and also works as an ultrasound sensor for sonar and other marine equipment,” he explained.

As thin films, meanwhile, they are used in computer microelectronics. “Because of their properties, they are used to build computer memories in different forms. Ferroelectrics exhibit an electric polarization even in the absence of an electric field; that polarization can be reversed, which makes it possible to create computing devices, since it defines two stable states (zero and one), and with that it is feasible to build the algebra on which computing is based,” he summarized.

Thin films of ferroelectric materials generally have a very high dielectric constant, which helps in developing capacitors that are small but of high capacitance.

“By enabling the magnetic property in these materials, the possibility arises of building computer memory that has four stable states instead of two, since the electric field can take two orientations and the magnetic field another two, and the two phenomena can be combined, which expands the memory’s capacity. That is something being investigated at present,” he noted.
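As a toy illustration of the counting behind that remark (hypothetical code, not part of the CNyN work), treating the electric and magnetic orientations as two independent binary order parameters gives 2 x 2 = 4 distinguishable states per cell, that is, two bits where an ordinary ferroelectric cell stores one:

```c
/* Toy model of a multiferroic memory cell: two independent binary order
 * parameters (electric polarization up/down, magnetization up/down) give
 * 2 x 2 = 4 distinguishable states, i.e. two bits per cell instead of one. */
#include <stdio.h>

typedef enum { DOWN = 0, UP = 1 } Orientation;

typedef struct {
    Orientation polarization;   /* set and read electrically */
    Orientation magnetization;  /* set and read magnetically */
} MultiferroicCell;

/* Encode the cell as a 2-bit value: 00, 01, 10 or 11. */
static unsigned cell_state(MultiferroicCell c)
{
    return (unsigned)((c.polarization << 1) | c.magnetization);
}

int main(void)
{
    for (int p = 0; p < 2; p++) {
        for (int m = 0; m < 2; m++) {
            MultiferroicCell c = { (Orientation)p, (Orientation)m };
            printf("P=%-4s M=%-4s -> state %u\n",
                   p ? "up" : "down", m ? "up" : "down", cell_state(c));
        }
    }
    return 0;
}
```

Reading the electric bit and the magnetic bit independently is what doubles the information stored per cell.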

Molecules from one material to another

Among their experiments, the university scientists use laser ablation, a method in which a material is bombarded with high-power laser beams, detaching atoms, molecules and particles from it that are deposited onto a substrate or onto another thin film of a different material, to obtain a system with new capabilities.

To replace lead, they have experimented with barium titanate, considered the piezoelectric material par excellence, which boomed after the Second World War. “It is still around, but it has not established itself as the definitive material. It is used a lot for capacitors, but not so much for memory,” he said.
The physicist and his colleagues are testing non-polluting options based on ceramic materials containing niobium, potassium and sodium, known generically as KNN.

“Under certain special conditions they begin to show useful properties. To the KNN we add rare-earth elements such as lanthanum, and others such as lithium and tantalum; in this way we have managed to improve some properties. We are in the process of incorporating into that material ‘dopants’ ranging from 0.5 to three atomic per cent,” he specified.

So far, he continued, the most promising development is KNN, although they face the problem that sodium and potassium compounds are hygroscopic (they attract water), so care must be taken that the material does not absorb moisture and that it is encapsulated or isolated, which turns the technological drawback into an economic one, since an additional step is required in manufacturing.

Another material they are experimenting with at the CNyN is PFN, an oxide of lead, iron and niobium that exhibits both ferroelectric and magnetic behaviour. “The most important thing is the interaction between the two, because in computing applications it allows a process of writing magnetically and reading electrically that is very energy-efficient,” he concluded.

Source

Wednesday, June 11, 2014

Google to the Rescue of Three Thousand Languages at Risk of Disappearing

The approach consists of recording samples of at-risk languages, accessing and sharing them, as well as researching these languages and offering advice and suggestions

Internet giant Google has embarked on a new adventure: it seeks to rescue 3,054 languages in danger of disappearing.

To achieve this goal it has launched the website www.endangeredlanguages.com, through which it seeks to raise awareness among the world’s citizens of the importance of preserving languages that seem destined to disappear.

The Silicon Valley company thus aims to safeguard languages as rare as Koro, spoken in the mountains of northeastern India; Ojibwa, spoken by ethnic groups in the northern United States; and Aragonese, native to a region of Spain and with barely 10,000 speakers.

According to the project, experts estimate that by the year 2100 only 50% of the languages alive today will still be spoken.

The approach consists of recording samples of at-risk languages, accessing and sharing them, as well as researching these languages and offering advice and suggestions to those working on documenting and protecting threatened languages.

"With the Endangered Languages project, Google puts its technology at the service of the organizations and individuals working to confront the threat to these languages through their documentation, preservation and teaching," its website states.

Although Google has overseen the development and launch of this project, the long-term goal is for it to be led by true experts in the field of language preservation.

Source

Sunday, February 16, 2014

Verses to Win Over the People

José Luis Torrego, in the newsroom of El Norte. / A. Tanarro

Pablo Neruda said in his Ode to Criticism that poetry should be born without aesthetic concessions yet so full of feeling that the people want to make it their own. That reflection is what inspires José Luis Torrego, author of 'Levantas los párpados y amanece' (Ediciones Vitruvio), who maintains that «everyone carries the seed of poetry inside, even though people today are very far removed from it».

This Segovian teacher, who teaches English and German at the Maristas school in Madrid and collaborates with the Escuela Universitaria Cardenal Cisneros in Alcalá de Henares, has sold more than 300 copies since February of what is, so far, his first publication. It is a collection of love poems. It is a book, says Torrego, «unhurried, sincere, heartfelt, not written with publication in mind, which stirs reminiscences and awakens in people something they have already felt». His most personal goal is to get his readers hooked on poetry. He says he has already achieved this with some of them, and that this is what has given him the greatest satisfaction from this publication.

Torrego is clear about why he chose this orientation for his collection. «Poetry is not like the novel. You set out to write a novel, you put in three hours a day and you tell a story. The way I see it, poetry abducts you and makes you write it. What inspires me is love poetry, although I have the odd poem that is not of this kind. Poetry has to be about what moves you, and what moves most in this genre are feelings, the abysses and the ecstasies of love», argues the Segovian.

One of the poems included in the book deals precisely with his native city, and with it he has managed to awaken in some of his readers (so they have told him) the sensation of being in Segovia through reading his verses.

Teaching

That identification with poetry in general is harder to find in schools because, in his opinion, facts prevail over the essence of the authors. «Birth years and the titles of works matter more than reading a poem and analysing it to understand that writer’s essence», he says. He considers this circumstance harmful when it comes to bringing poetry closer to the young: «That puts people off. If to you Salinas is the guy you had to memorize two pages about, you hate him. If, on the other hand, you read him, it is a different matter, but that is not done».

Pedro Salinas, a poet of the Generation of ’27, is one of this Segovian author’s touchstones. In his own work he has let himself be guided by «Salinas’s search for clarity in the labyrinths of the you and the I», the transformation of poetry as a craft «into a human act of baring oneself» embodied by Garcilaso de la Vega, and the distinction drawn by Gustavo Adolfo Bécquer «between ornamental poetry and the poetry that is born in the spirit».

Source

Wednesday, April 24, 2013

Young Chilevisión Heartthrob to Join Canal 13’s Evening Telenovela

Little by little, new faces of Chilean TV are joining Canal 13’s upcoming evening telenovela, Mamá Mechona.
The telenovela, which is provisionally set to star Sigrid Alegría in the role of a forty-something mother trying to get into university, will be the channel’s flagship bid to re-establish itself in the 8:00 p.m. slot.
In this context, it was reported that, in addition to Katyna Huberman, former Chilevisión youth actor Jaime Artus will join the project, having recently reached an agreement with the station.
It is worth recalling that Artus rose to fame after appearing in Yingo and in that show’s productions, such as Vampiras and Gordis.
For the time being, Terra reported, the actor is part of the Vía X show Planeta Comedia, awaiting the start of his role in the Catholic station’s telenovela.