Well, at least it wasn't "you're lying"...
I spotted him back in What's Eating Gilbert Grape, as a kid. But plenty of talented child actors fade out fast. He, though, really blossomed, especially with maturity.
Watch the film where he played Rimbaud.
I know it by heart.
Stierlitz strongly disliked that film..
He acted well, but as a pretty boy. There are plenty of those. Very few can act well while being fat and unattractive.
Of course. I'm talking strictly about the consumers of that stuff. They want their Yolanda.
Obviously it has more imagination (it has already read all the fantasies) and it's easier to place orders with (it's a flatterer).
So your acquaintances turn out to be double losers: not only can't they interest a woman in real life, they can't even do it virtually.. :-)
Both are married, so they did (in their day) manage to interest some women.
I don't know such details. Though I see no reason to draw such conclusions. Mistresses are usually taken in addition, not instead.
I noticed him even earlier, in Santa Barbara, when it first started airing in the USSR; he played the young Mason. Everyone probably remembers him. His face hadn't changed by Titanic, he looked just the same. I recognized him right away; I have a very good visual memory.
https://archive.is/ofgeW
Matteo Wong @ The Atlantic wrote: A car that accelerates instead of braking every once in a while is not ready for the road. A faucet that occasionally spits out boiling water instead of cold does not belong in your home. Working properly most of the time simply isn’t good enough for technologies that people are heavily reliant upon. And two and a half years after the launch of ChatGPT, generative AI is becoming such a technology.
[...]
For all their promise, these tools are still … janky. At the start of the AI boom, there were plenty of train wrecks—Bing’s chatbot telling a tech columnist to leave his wife, ChatGPT espousing overt racism—but these were plausibly passed off as early-stage bugs. Today, though the overall quality of generative-AI products has improved dramatically, subtle errors persist: the wrong date, incorrect math, fake books and quotes. Google Search now bombards users with AI overviews above the actual search results or a reliable Wikipedia snippet; these occasionally include such errors, a problem that Google warns about in a disclaimer beneath each overview. Facebook, Instagram, and X are awash with bots and AI-generated slop. Amazon is stuffed with AI-generated scam products. Earlier this year, Apple disabled AI-generated news alerts after the feature inaccurately summarized multiple headlines.
[...]
The reasons for generative AI’s problems are no mystery. Large language models like those that underlie ChatGPT work by predicting characters in a sequence, mapping statistical relationships between bits of text and the ideas they represent. Yet prediction, by definition, is not certainty. Chatbots are very good at producing writing that sounds convincing, but they do not make decisions according to what’s factually correct. Instead, they arrange patterns of words according to what “sounds” right. Meanwhile, these products’ internal algorithms are so large and complex that researchers cannot hope to fully understand their abilities and limitations. For all the additional protections tech companies have added to make AI more accurate, these bots can never guarantee accuracy. The embarrassing failures are a feature of AI products, and thus they are becoming features of the broader internet.
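The prediction-versus-certainty point above can be sketched in a toy example. Everything here is invented for illustration (the candidate words and scores do not come from any real model); it only shows how sampling by statistical likelihood can favor a fluent but factually wrong continuation:

```python
import math
import random

def softmax(scores):
    """Turn raw model scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores after a prompt like
# "The capital of Australia is" -- numbers made up for illustration.
candidates = ["Sydney", "Canberra", "Melbourne"]
scores = [2.1, 1.9, 0.4]  # "Sydney" co-occurs with "Australia" more often in text

probs = softmax(scores)

# The sampler picks what *sounds* right, not what *is* right:
# here "Sydney" (wrong) is the likeliest pick, "Canberra" (correct) is second.
choice = random.choices(candidates, weights=probs, k=1)[0]
print(choice, probs)
```

In this sketch the statistically likeliest continuation is also the factually incorrect one, which is the article's point: the mechanism optimizes for plausible-sounding text, and no amount of added guardrails turns a probability distribution into a guarantee.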
[...]
Reorienting the internet and society around imperfect and relatively untested products is not the inevitable result of scientific and technological progress—it is an active choice Silicon Valley is making, every day. That future web is one in which most people and organizations depend on AI for most tasks. This would mean an internet in which every search, set of directions, dinner recommendation, event synopsis, voicemail summary, and email is a tiny bit suspect; in which digital services that essentially worked in the 2010s are just a little bit unreliable. And while minor inconveniences for individual users may be fine, even amusing, an AI bot taking incorrect notes during a doctor visit, or generating an incorrect treatment plan, is not.
AI products could settle into a liminal zone. They may not be wrong frequently enough to be jettisoned, but they also may not be wrong rarely enough to ever be fully trusted. For now, the technology’s flaws are readily detected and corrected. But as people become more and more accustomed to AI in their life—at school, at work, at home—they may cease to notice. Already, a growing body of research correlates persistent use of AI with a drop in critical thinking; humans become reliant on AI and unwilling, perhaps unable, to verify its work. As chatbots creep into every digital crevice, they may continue to degrade the web gradually, even gently. Today’s jankiness may, by tomorrow, simply be normal.
That's from the Foma Kiniaev universe.