Winter Dec 25 - Jan 26 Direct Incentives

They still have Dream it Forward, but buyers don’t get anything for it. Only members do, which is why we closed the thread.
Why close it? Everyone “would love to be a referral”. Are you saying they wouldn’t without getting compensated?
 
For sure!

It understands the general DVC framework (resale restrictions, Riviera resale limitations, membership extras, etc.), but it only applies them correctly if the inputs are clear.

- If you say “150 Riviera points purchased direct”, it will correctly treat those as unrestricted.

- If you say “Riviera resale”, it will apply the post-2019 resale restriction (Riviera-only stays).

- For legacy resorts (AKV, BLT, CCV, etc.), it knows resale vs direct is functionally the same for booking, but different for perks.

It won’t assume restrictions unless you explicitly tell it how the points were acquired. The quality of the answer depends heavily on how the question is framed.
I asked ChatGPT for the best Boardwalk view Grand Villa rooms and it proceeded to give me room numbers that were not Grand Villas.

I then called it out on its mistake; it apologized and proceeded to tell me that there are no Boardwalk view Grand Villas….

I once again told it that it was wrong, that those rooms are only Boardwalk view, and then it proceeded to give me room numbers that were not Grand Villas again…

So… those language models can just confidently make stuff up, and if it’s not a subject you have some knowledge of, then you wouldn’t know any better….
 

I asked ChatGPT for the best Boardwalk view Grand Villa rooms and it proceeded to give me room numbers that were not Grand Villas.

I then called it out on its mistake; it apologized and proceeded to tell me that there are no Boardwalk view Grand Villas….

I once again told it that it was wrong, that those rooms are only Boardwalk view, and then it proceeded to give me room numbers that were not Grand Villas again…

So… those language models can just confidently make stuff up, and if it’s not a subject you have some knowledge of, then you wouldn’t know any better….
That's hilarious and I'm not surprised.

You’re 100% right to call it out. If someone doesn’t already have baseline DVC knowledge, an LLM can sound very confident while being subtly (or not so subtly) wrong. The trick is using it as a starting point, not an authority - especially with DVC minutiae.

I use LLMs a great deal for work activities, not so much recreationally. But I do generally like the results.
 
I asked ChatGPT for the best Boardwalk view Grand Villa rooms and it proceeded to give me room numbers that were not Grand Villas.

I then called it out on its mistake; it apologized and proceeded to tell me that there are no Boardwalk view Grand Villas….

I once again told it that it was wrong, that those rooms are only Boardwalk view, and then it proceeded to give me room numbers that were not Grand Villas again…

So… those language models can just confidently make stuff up, and if it’s not a subject you have some knowledge of, then you wouldn’t know any better….

That is called hallucination and is a big problem with LLMs. I worry about people asking LLMs about important things like mental health or what have you, and the LLM hallucinates.
 
BLT resale would be what I would do. The spread for BLT direct is rather large. How many points were you considering? Direct *could* make sense if you only need 50 or fewer and use a decent credit card offer to buy it. Maybe BLT will get an incentive come February, but the base price could also go up.
I need to map out my needs a bit better, but at BLT I’m thinking between 50-100 should be enough. The problem with resale is that there’s only one contract right now, with 200pts. That’ll cost me significantly more in the long run.
It seems you are not booking soon, so I feel you do have the two months or so that the resale process takes.
My 11mo window opens on Jan 4th, so ideally I’d like to book it then.

What sort of accommodations do you usually stay in at BLT? I know you said you're looking for points for your 2026 Christmas trip, but is that a typical time of year you'll use your points or is this a one off thing?
We’re usually 1 bedroom people, but for this short trip we’d book a studio assuming availability. Plan is to make this a yearly trip.
I ask for a couple of reasons. One, preferred/lake view rooms at BLT tend to be pretty easy to book at the 7-month mark for much of the year. Two, resort view rooms, if you own there, can be a little difficult to get well before the 7-month mark because there are so few of them. But December is a tough month at any resort unless you own there, so if regular Christmas stays are in the plan, owning there probably does make more sense than purchasing points elsewhere to use there. That said, even if you got direct BLT points loaded to your account today, my guess is that the resort view rooms are already being walked for December 2026. But you'd have a better chance if you got them loaded now than if you need to wait for the resale process.

That said, I'm with @VGCgroupie and @PlutoNotPlanet, BLT direct at $275/point with no incentives whatsoever is a pretty steep price to pay. I guess if all you need is 25 or 50 points, maybe?
Like I said, logically I know BLT direct is crazy bananas. But I still can’t help but want it?

I’m going to make a spreadsheet and see what the numbers say.
 
That is called hallucination and is a big problem with LLMs. I worry about people asking LLMs about important things like mental health or what have you, and the LLM hallucinates.
A bit off topic, but adding this since people here are using them. LLMs are great tools, but people are using them with the belief that they “know” what they are answering.

It’s good to be aware of this:

LLMs don’t know facts. An LLM knows what facts look like when written.

That’s why they can confidently say something wrong, hallucinate, and sound persuasive while being incorrect.

An LLM doesn’t know what it’s answering. But based on its training, the answer looks good for your question. It doesn’t pause and think about what it just answered. If the answer is wrong but looks plausible, fits the tone, and matches the pattern of the conversation, it’ll happily keep going. And when you point it out, it’ll say “you’re absolutely right!” (even if you’re also wrong 🤣).
 
A bit off topic, but adding this since people here are using them. LLMs are great tools, but people are using them with the belief that they “know” what they are answering.

It’s good to be aware of this:

LLMs don’t know facts. An LLM knows what facts look like when written.

That’s why they can confidently say something wrong, hallucinate, and sound persuasive while being incorrect.

An LLM doesn’t know what it’s answering. But based on its training, the answer looks good for your question. It doesn’t pause and think about what it just answered. If the answer is wrong but looks plausible, fits the tone, and matches the pattern of the conversation, it’ll happily keep going. And when you point it out, it’ll say “you’re absolutely right!” (even if you’re also wrong 🤣).
Well stated!

LLMs are powerful assistants, not arbiters of truth. Informed users need to do a sanity check of the answers. Blind trust is the real risk.
 
So… those language models can just confidently make stuff up, and if it’s not a subject you have some knowledge of, then you wouldn’t know any better….
Exactly. This is why I call them the Probabilistic Plagiarism Machines.

I let my students use The Internet during my exams. They can look up anything they want, but are told not to ask questions of anything that claims to be intelligent, human or otherwise. I am sure some do anyway, which is fine, because the current models (even the ones trained on a Big Pile of Computer Science) tend to get solid Ds on my exams.
 
The problem with resale is that there’s only one contract right now, with 200pts.
Can't talk about currently listed contracts since it's the rule. But there are quite a few on the market, including from the board sponsor, with a variety of UYs. Many are stripped, but not all. You can use tools to check more brokers.
 
Exactly. This is why I call them the Probabilistic Plagiarism Machines.

I let my students use The Internet during my exams. They can look up anything they want, but are told not to ask questions of anything that claims to be intelligent, human or otherwise. I am sure some do anyway, which is fine, because the current models (even the ones trained on a Big Pile of Computer Science) tend to get solid Ds on my exams.
You mean not everything on the Internet is TRUE or accurate?? Oh my......!
 
I asked ChatGPT for the best Boardwalk view Grand Villa rooms and it proceeded to give me room numbers that were not Grand Villas.
That is called hallucination and is a big problem with LLMs.
Exactly. This is why I call them the Probabilistic Plagiarism Machines.

I let my students use The Internet during my exams. They can look up anything they want, but are told not to ask questions of anything that claims to be intelligent, human or otherwise. I am sure some do anyway, which is fine, because the current models (even the ones trained on a Big Pile of Computer Science) tend to get solid Ds on my exams.
Jumping into the rabbit hole before the mod says stop!

The current top problem is not summarization or retrieval; it's more on the searching side. For well-known questions, it answers pretty well. But when it comes to the details, it fetches irrelevant results and gives you answers anyway.
 
Jumping into the rabbit hole before the mod says stop!

The current top problem is not summarization or retrieval; it's more on the searching side. For well-known questions, it answers pretty well. But when it comes to the details, it fetches irrelevant results and gives you answers anyway.
That's fair criticism. Hallucinations are real, and treating LLM output as authoritative is a mistake, especially for precise or niche details. I don’t think anyone serious about the technology would argue otherwise.

That said, I’m not sure the right reaction is to be scared of or dismiss the technology outright. We’ve seen this pattern before with search engines and the internet itself. Early misuse and overtrust led to backlash, but we've kinda figured it out. And we will figure this out as well.

These models are tools, not oracles, best used for exploration, synthesis, and idea generation. Paired with human judgment and verification, they’re already useful. Imperfect, absolutely. The problem isn't that the technology exists, it’s that people stop thinking.

If anything, this just reinforces the need to teach how to use these tools critically, not pretend they don’t exist or ban them outright. Ignoring them won’t make them go away, and it certainly won’t prepare students or professionals to use them responsibly.

I'm not here to sway anyone's opinion of the technology or tell someone what to do or not to do with it. I'm not in the providing education to others department, minus my kids of course :-)

Ok, I'm off my soapbox. I realize this went off topic. My apologies and I'm moving on!
 
Can't talk about currently listed contracts since it's the rule. But there are quite a few on the market, including from the board sponsor, with a variety of UYs. Many are stripped, but not all. You can use tools to check more brokers.
Sorry, I should've clarified that I meant in my UY. I'm not interested in getting a different UY.
 
I need to map out my needs a bit better, but at BLT I’m thinking between 50-100 should be enough. The problem with resale is that there’s only one contract right now, with 200pts. That’ll cost me significantly more in the long run.

My 11mo window opens on Jan 4th, so ideally I’d like to book it then.


We’re usually 1 bedroom people, but for this short trip we’d book a studio assuming availability. Plan is to make this a yearly trip.

Like I said, logically I know BLT direct is crazy bananas. But I still can’t help but want it?

I’m going to make a spreadsheet and see what the numbers say.
The spreadsheet is going to say you are crazy 🍌 🤣
 
That's fair criticism. Hallucinations are real, and treating LLM output as authoritative is a mistake, especially for precise or niche details. I don’t think anyone serious about the technology would argue otherwise.

That said, I’m not sure the right reaction is to be scared of or dismiss the technology outright. We’ve seen this pattern before with search engines and the internet itself. Early misuse and overtrust led to backlash, but we've kinda figured it out. And we will figure this out as well.

These models are tools, not oracles, best used for exploration, synthesis, and idea generation. Paired with human judgment and verification, they’re already useful. Imperfect, absolutely. The problem isn't that the technology exists, it’s that people stop thinking.

If anything, this just reinforces the need to teach how to use these tools critically, not pretend they don’t exist or ban them outright. Ignoring them won’t make them go away, and it certainly won’t prepare students or professionals to use them responsibly.

I'm not here to sway anyone's opinion of the technology or tell someone what to do or not to do with it. I'm not in the providing education to others department, minus my kids of course :-)

Ok, I'm off my soapbox. I realize this went off topic. My apologies and I'm moving on!
For the record, I use co-pilot frequently at work to create a CRM note, draft a meeting recap email, or even just to make sure that my tone isn’t too harsh in internal Teams messages or emails.

I have also uploaded PDFs to ChatGPT that contain DVC point charts from various years and resorts and asked it to analyze the information and let me know about any material changes, or which times of year may be better to visit Resort A vs Resort B from a point chart perspective. I feel it did a solid job at this.

So… I am definitely on board with optimizing my productivity through AI so that I am above the 30%+ white collar workforce reduction buzz saw that may be coming society’s way… but I have also learned where some of its current blind spots are.
 
Incidentally, it looks like they bumped up the "we trust you" interest rate from 9% to 9.5%. Maybe this is the wrong set of charts and there is another one with a lower rate....
I noticed that as well--and thought it was strange, as the fed rate is down. I know these rates aren't linked, but since this type of interest tends to be forward looking in terms of the financial markets, I'm confused, unless this, too, is just a way to squeeze a little more nearly invisible profit out of customers (which it probably is).
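To put that 0.5% bump in perspective, here's a rough sketch of the standard amortization math with hypothetical numbers (a $30,000 financed balance over 10 years; actual DVC loan amounts and terms will differ):

```python
# Rough sketch: what a 9% -> 9.5% rate bump costs on a hypothetical loan.
# Principal and term below are made-up examples, not actual DVC financing terms.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard fixed-rate amortization formula."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

principal, years = 30_000, 10
old = monthly_payment(principal, 0.090, years)
new = monthly_payment(principal, 0.095, years)

print(f"9.0%: ${old:,.2f}/mo   9.5%: ${new:,.2f}/mo")
print(f"Extra paid over the life of the loan: ${(new - old) * years * 12:,.2f}")
```

On those assumed numbers it comes out to roughly $8 more per month, close to $1,000 over the full term. Nearly invisible, like you said.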
 
The spreadsheet is going to say you are crazy 🍌 🤣
My husband said, and I quote: "to hell with the price" so who's the real crazy one here? 😂

Spreadsheet (for simplicity of math I assumed no increase in annual dues):
[spreadsheet image attached]
My husband asked me to add 100pts at BLT, but that's a nonstarter. 50pts would suffice, but am I going to be in this same place in a couple of years? Then I'm comparing 80pts at BLT and 100pts at CCV. I'm really not sure what to do now.
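Since the image doesn't paste well here, the spreadsheet math is basically just cost per point per year: (purchase price + lifetime dues) divided by (points × years remaining). A minimal sketch of that comparison in Python, where the CCV price, the dues figures, and the years remaining are placeholder examples I made up for illustration (only the $275/pt BLT direct price comes from earlier in the thread):

```python
# Rough cost-per-point-per-year comparison between two hypothetical contracts.
# Dues are held flat for simplicity (same assumption as my spreadsheet);
# all figures other than the $275 BLT direct price are illustrative placeholders.

def cost_per_point_per_year(points, price_per_point, dues_per_point, years_left):
    upfront = points * price_per_point
    lifetime_dues = points * dues_per_point * years_left
    return (upfront + lifetime_dues) / (points * years_left)

options = {
    "BLT direct, 80 pts":  cost_per_point_per_year(80,  275.0, 7.50, 35),
    "CCV, 100 pts":        cost_per_point_per_year(100, 230.0, 8.50, 43),
}

for name, cost in options.items():
    print(f"{name}: ~${cost:.2f} per point per year")
```

The structure of the comparison is the part that matters; swap in the real quotes, current dues, and actual years left on each contract before drawing any conclusions.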
 
My husband said, and I quote: "to hell with the price" so who's the real crazy one here? 😂

Spreadsheet (for simplicity of math I assumed no increase in annual dues):
[spreadsheet image attached]
My husband asked me to add 100pts at BLT, but that's a nonstarter. 50pts would suffice, but am I going to be in this same place in a couple of years? Then I'm comparing 80pts at BLT and 100pts at CCV. I'm really not sure what to do now.
Why isn’t 100 points at BLT (the 3rd option) the best option?
 










